# An introduction to the MXNet API — part 2

## Computation steps? You mean code, right?

That’s a fair question! Haven’t we all learned that “program = data structures + code”? NDArrays are our data structures, let’s just add code!

## Dataflow programming

“Dataflow programming” is a flexible way of defining parallel computation, where data flows through a graph. The graph defines the order of operations, i.e. whether they need to be run sequentially or whether they may be run in parallel. Each operation is a black box: we only define its input and output, without specifying its actual behaviour.
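To make this concrete, here is a minimal plain-Python sketch (not MXNet code — the node names and the tiny evaluator are purely illustrative) of a graph computing e = (a*b)+(c*d). Each node declares only its inputs and its operation; nothing specifies execution order, so a scheduler is free to run the two multiplications in parallel because neither depends on the other.

```python
# A toy dataflow graph: each node lists its inputs and its operation,
# but no execution order is written down anywhere.
graph = {
    "mul0": {"inputs": ["a", "b"], "op": lambda x, y: x * y},  # independent of mul1
    "mul1": {"inputs": ["c", "d"], "op": lambda x, y: x * y},  # independent of mul0
    "plus": {"inputs": ["mul0", "mul1"], "op": lambda x, y: x + y},
}

def evaluate(graph, values, node):
    """Recursively evaluate a node. A real scheduler could run the
    independent branches (mul0 and mul1) in parallel."""
    if node in values:
        return values[node]
    spec = graph[node]
    args = [evaluate(graph, values, name) for name in spec["inputs"]]
    return spec["op"](*args)

print(evaluate(graph, {"a": 1, "b": 2, "c": 3, "d": 4}, "plus"))  # 1*2 + 3*4 = 14
```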

## The Symbol API

So now we know why these things are called symbols (not a minor victory!). Let’s code the example above: e = (a*b)+(c*d).

```python
>>> import mxnet as mx
>>> a = mx.symbol.Variable('A')
>>> b = mx.symbol.Variable('B')
>>> c = mx.symbol.Variable('C')
>>> d = mx.symbol.Variable('D')
>>> e = (a*b)+(c*d)
```

```python
>>> (a,b,c,d)
(<Symbol A>, <Symbol B>, <Symbol C>, <Symbol D>)
>>> e
<Symbol _plus1>
>>> type(e)
<class 'mxnet.symbol.Symbol'>
```

```python
>>> e.list_arguments()
['A', 'B', 'C', 'D']
>>> e.list_outputs()
['_plus1_output']
>>> e.get_internals().list_outputs()
['A', 'B', '_mul0_output', 'C', 'D', '_mul1_output', '_plus1_output']
```
These calls tell us that:

• e depends on variables a, b, c and d,
• the operation that computes e is a sum,
• e is indeed (a*b)+(c*d).

## Binding NDArrays and Symbols

First, let’s define the input data as NDArrays:

```python
>>> import numpy as np
>>> a_data = mx.nd.array([1], dtype=np.int32)
>>> b_data = mx.nd.array([2], dtype=np.int32)
>>> c_data = mx.nd.array([3], dtype=np.int32)
>>> d_data = mx.nd.array([4], dtype=np.int32)
```
```python
>>> executor = e.bind(mx.cpu(), {'A':a_data, 'B':b_data, 'C':c_data, 'D':d_data})
>>> executor
<mxnet.executor.Executor object at 0x10da6ec90>
```
```python
>>> e_data = executor.forward()
>>> e_data
[<NDArray 1 @cpu(0)>]
>>> e_data[0]
<NDArray 1 @cpu(0)>
>>> e_data[0].asnumpy()
array([14], dtype=int32)
```
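As a sanity check, we can reproduce the same computation imperatively with NumPy on small int32 inputs (the sample values here are illustrative). Note the difference in style: every line below executes immediately, whereas the bound Symbol runs the whole graph in a single forward() call.

```python
import numpy as np

# Same e = (a*b)+(c*d) computation, written imperatively.
a = np.array([1], dtype=np.int32)
b = np.array([2], dtype=np.int32)
c = np.array([3], dtype=np.int32)
d = np.array([4], dtype=np.int32)
e = (a * b) + (c * d)
print(e)  # [14]
```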
```python
>>> a_data = mx.nd.uniform(low=0, high=1, shape=(1000,1000))
>>> b_data = mx.nd.uniform(low=0, high=1, shape=(1000,1000))
>>> c_data = mx.nd.uniform(low=0, high=1, shape=(1000,1000))
>>> d_data = mx.nd.uniform(low=0, high=1, shape=(1000,1000))
>>> executor = e.bind(mx.cpu(), {'A':a_data, 'B':b_data, 'C':c_data, 'D':d_data})
>>> e_data = executor.forward()
>>> e_data
[<NDArray 1000x1000 @cpu(0)>]
>>> e_data[0]
<NDArray 1000x1000 @cpu(0)>
>>> e_data[0].asnumpy()
array([[ 0.89252722,  0.46442914,  0.44864511, ...,  0.08874825,
         0.83029556,  1.15613985],
       [ 0.10265817,  0.22077513,  0.36850023, ...,  0.36564362,
         0.98767519,  0.57575727],
       [ 0.24852338,  0.6468209 ,  0.25207704, ...,  1.48333383,
         0.1183901 ,  0.70523977],
       ...,
       [ 0.85037285,  0.21420079,  1.21267629, ...,  0.35427764,
         0.43418071,  1.12958288],
       [ 0.14908466,  0.03095067,  0.19960476, ...,  1.13549757,
         0.22000578,  0.16202438],
       [ 0.47174677,  0.19318949,  0.05837669, ...,  0.06060726,
         1.01848066,  0.48173574]], dtype=float32)
```
Let’s recap what just happened:

• Data is loaded and prepared using the imperative programming model we’re all very familiar with. We can even use any external library in the process (it’s just good old code!).
• Computation is performed using the symbolic programming model, which allows MXNet not only to decouple code and data, but also to perform parallel execution and graph optimisation.
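That second point, decoupling code and data, is worth dwelling on: the same graph definition can be bound to inputs of any shape. Here is a hedged plain-Python analogy (the class and names are illustrative, not the MXNet API) of defining a computation once and binding it many times:

```python
import numpy as np

class ToyExecutor:
    """Illustrative stand-in for an MXNet executor: a computation
    (here just a Python function) bound to concrete arrays."""
    def __init__(self, fn, args):
        self.fn = fn
        self.args = args

    def forward(self):
        return self.fn(**self.args)

# Define the computation once, independently of any data...
e = lambda A, B, C, D: (A * B) + (C * D)

# ...then bind it to small data, or to big data, without changing the code.
small = ToyExecutor(e, {k: np.full((2, 2), v) for k, v in
                        zip("ABCD", [1.0, 2.0, 3.0, 4.0])})
big = ToyExecutor(e, {k: np.random.uniform(size=(1000, 1000)) for k in "ABCD"})

print(small.forward())      # every entry is 1*2 + 3*4 = 14.0
print(big.forward().shape)  # (1000, 1000)
```

Because the graph is data-independent, MXNet can analyse it ahead of time and schedule independent branches in parallel, something an eager, line-by-line model can’t easily do.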
Next up:

• Part 3: the Module API
• Part 4: Using a pre-trained model for image classification (Inception v3)
• Part 5: More pre-trained models (VGG16 and ResNet-152)
• Part 6: Real-time object detection on a Raspberry Pi (and it speaks, too!)
