Namespace network

Below are some examples of how to use functions in the network namespace.


Want to expose your model to the network? Use network.serve:

```ts
import * as sm from '@shumai/shumai'
import { model } from './model'

sm.network.serve({
  run_model: (_user, input) => {
    // output tensors are automatically serialized
    return model(input)
  }
})
```

A client can use network.tfetch (basically fetch but for Tensors):

```ts
import * as sm from '@shumai/shumai'

const input = sm.randn([128])
const url = 'localhost:3000/run_model'
const output = await sm.network.tfetch(url, input)
```


Want to train your model over the network? Just add an endpoint for a backward pass to network.serve:

```ts
sm.network.serve({
  run_model: (user, input) => {
    const out = model(input)
    // capture a backward pass
    user.opt = (jacobian) => {
      sm.optim.sgd(out.backward(jacobian), 1e-3)
    }
    return out
  },
  optimize_model: (user, jacobian) => {
    // run it when that same user gives us a jacobian
    user.opt(jacobian)
  }
})
```
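The serve call above works because each handler receives a per-user state object that persists across that user's requests, so run_model can stash a closure that a later optimize_model call invokes. The same pattern in plain TypeScript (a toy sketch with numbers instead of tensors; the names here are illustrative, not Shumai API):

```typescript
// per-user state, as the server maintains between requests
type User = { opt?: (jacobian: number) => void }

const applied: number[] = []

const handlers = {
  // returns a "prediction" and stashes how to apply a jacobian to it
  run_model: (user: User, input: number): number => {
    const out = input * 2
    // closure over this request's output, kept in per-user state
    user.opt = (jacobian) => { applied.push(jacobian * 2) }
    return out
  },
  // a later request from the same user applies the stashed closure
  optimize_model: (user: User, jacobian: number): void => {
    user.opt?.(jacobian)
  },
}
```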

And the client can feed the jacobian with network.tfetch:

```ts
const url = 'localhost:3000'
for (let i of sm.range(100)) {
  const [input, ref_output] = get_data()
  const output = await sm.network.tfetch(`${url}/run_model`, input)

  // get the jacobian from a loss function
  output.requires_grad = true
  const loss = sm.loss.mse(output, ref_output)
  loss.backward()

  // send that jacobian back
  await sm.network.tfetch(`${url}/optimize_model`, output.grad)
}
```
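The jacobian being sent back is just the gradient of the loss with respect to the served model's output. For mean-squared error with mean reduction that works out to 2·(output − ref)/N; a plain-TypeScript sketch of that arithmetic (no Shumai, and assuming the mean-reduction convention):

```typescript
// gradient of mean-squared-error loss with respect to the prediction y,
// assuming mean reduction: d/dy_i (1/N) Σ_j (y_j - ref_j)^2 = 2 (y_i - ref_i) / N
function mseGrad(y: number[], ref: number[]): number[] {
  const n = y.length
  return y.map((v, i) => (2 * (v - ref[i])) / n)
}
```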


Shumai provides a wrapper for the above setup code. network.serve_model will create /forward and /optimize endpoints for you.

```ts
import * as sm from '@shumai/shumai'
import { model } from './model'

sm.network.serve_model(model, sm.optim.sgd)
```

And the client can connect with network.remote_model, which attaches a hook to backward for automatic gradients.

```ts
import * as sm from '@shumai/shumai'

const model = sm.network.remote_model('localhost:3000')

for (let i of sm.range(100)) {
  const [input, ref_output] = get_data()
  const output = await model(input)

  const loss = sm.loss.mse(ref_output, output)

  // async now, as it propagates through the network
  await loss.backward()
}
```


Want to run more than just a trivial remote trainer? Below is a distributed, model-parallel and pipelined server. We invoke multiple remote models and then make our own model server.

```ts
import * as sm from '@shumai/shumai'

const A = sm.network.remote_model('localhost:3001')
const B = sm.network.remote_model('localhost:3002')
const C = sm.network.remote_model('localhost:3003')

// no need to wrap this, autograd knows what's up
const weight = sm.randn([128, 128]).requireGrad()

async function model(input) {
  // this will run in parallel
  const [a, b] = await sm.util.all(A(input), B(input))
  // automatically pipelined (isn't async great?)
  const c = await C(a)
  return c.mul(b).matmul(weight)
}

sm.network.serve_model(model, sm.optim.sgd)
```
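The parallelism and pipelining fall out of ordinary async composition. A plain-TypeScript sketch of the same dataflow, with Promise.all standing in for sm.util.all and numbers standing in for tensors (illustrative only):

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

// stand-in "remote models": async functions with artificial network latency
const A = async (x: number) => { await delay(10); return x + 1 }
const B = async (x: number) => { await delay(10); return x * 2 }
const C = async (x: number) => { await delay(10); return x - 3 }

async function model(input: number): Promise<number> {
  // A and B run concurrently, like sm.util.all
  const [a, b] = await Promise.all([A(input), B(input)])
  // C starts as soon as A resolves: a natural pipeline
  const c = await C(a)
  return c + b
}
```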

Same client as before :)

What about debugging?

All network.serve* methods automatically give us basic /statistics as JSON:

```
$ curl -s localhost:3000/statistics | jq
{
  "forward": {
    "hits": 1000,
    "seconds": 0.12337932200005891
  },
  "optimize": {
    "hits": 1000,
    "seconds": 0.16975503499999103
  }
}
```

but we can always add more:

```ts
sm.network.serve_model(model, sm.optim.sgd, { port: 3000 }, {
  statistics: () => {
    return { weight: weight.mean().toFloat32() }
  }
})
```

```
$ curl -s localhost:3000/statistics | jq .weight
```

including recursively:

```ts
sm.network.serve_model(model, sm.optim.sgd, { port: 3000 }, {
  statistics: async () => {
    return { A: await (await fetch('localhost:3001/statistics')).json() }
  }
})
```
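Client-side, those hit/second counters are enough to derive simple latency and throughput numbers. A small plain-TypeScript helper, using the sample forward payload from the curl output above:

```typescript
interface EndpointStats { hits: number; seconds: number }

// average latency per request, in milliseconds
function avgLatencyMs(s: EndpointStats): number {
  return (s.seconds / s.hits) * 1000
}

// sample payload from the /statistics output above:
// forward averages roughly 0.123 ms per request
const forward: EndpointStats = { hits: 1000, seconds: 0.12337932200005891 }
```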



Generated using TypeDoc