Machine Learning on a Chip

Add a machine learning co-processor to your circuits. Our unique chip technology gives us an unfair advantage in machine-learning weight updates per second per watt per kilogram (Hz/W/kg) over all alternatives.

Restricted Boltzmann Machines and Deep Belief Networks

Our proof-of-concept integrated circuit designs will implement general-purpose restricted Boltzmann machines (RBMs) and deep belief networks (DBNs) in single packages, offering the most compact, fastest and most efficient implementations available of these standard machine learning models.
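For readers unfamiliar with how an RBM learns, the sketch below shows a conventional software formulation of the weight-update rule (contrastive divergence, CD-1) that hardware like this would accelerate. The layer sizes mirror the FS8A's 115 x 115 layout; the learning rate and all function names are illustrative assumptions, not a description of the on-chip algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions loosely mirroring the FS8A device:
# two fully connected layers of 115 node processors each.
n_visible, n_hidden = 115, 115
W = rng.normal(0, 0.01, (n_visible, n_hidden))  # visible-hidden weights
b = np.zeros(n_visible)                         # visible biases
c = np.zeros(n_hidden)                          # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update, mutating W, b, c in place."""
    ph0 = sigmoid(v0 @ W + c)                    # hidden unit probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # stochastic hidden sample
    pv1 = sigmoid(h0 @ W.T + b)                  # reconstructed visible layer
    ph1 = sigmoid(pv1 @ W + c)                   # hidden probabilities of reconstruction
    # Positive phase minus negative phase, applied to every weight in parallel
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)

# One update on a random binary visible vector
v0 = (rng.random(n_visible) < 0.5).astype(float)
cd1_step(W, b, c, v0)
```

Each CD-1 step touches every weight once; it is exactly this dense, per-weight update that the parallel node-processor arrays perform in hardware.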

Nominal device parameters

Device    Function  Layers  NP / layer  Weights / NP  Update rate / Hz  Power / W
FS8A      RBM        2      115         115           1.3 x 10^6        < 1
FS8B      RBM        2      10^4        1000          2.0 x 10^9        < 1
FS8C      RBM        2      10^6        1000          2.0 x 10^11       20
FS412A6   DBN        6      115         115           7.9 x 10^6        < 1
FS412C10  DBN       10      10^6        1000          1.0 x 10^12       100

The functional designs comprise layered rectangular arrays of node processors (NPs) operating in parallel. Layers in the FS8A and FS412A6 devices are globally connected, with each NP connected by weights to all NPs in adjoining layers. Maximal connectivity in the larger devices follows a regular pattern that is locally dense and globally sparse. Actual connectivity is programmable within these maxima. Weight values are adjusted automatically on chip during learning and are non-volatile on removal of power.
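A locally dense, globally sparse connectivity pattern can be pictured as a binary mask over the full weight matrix: weights exist, and learning updates apply, only where the mask is set. The sketch below illustrates this with a small 16 x 16 NP layer and a neighbourhood radius; the grid size, radius, and function names are illustrative assumptions, not device parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 16 x 16 layer of node processors; each NP connects only
# to NPs within a small neighbourhood in the adjoining layer.
side = 16
n = side * side      # 256 NPs per layer
radius = 2           # assumed local-neighbourhood radius

# Grid coordinates of every NP
ys, xs = np.divmod(np.arange(n), side)

# mask[i, j] is True when NPs i and j lie within `radius` on both axes:
# dense within a neighbourhood, sparse across the whole layer.
mask = (np.abs(ys[:, None] - ys[None, :]) <= radius) & \
       (np.abs(xs[:, None] - xs[None, :]) <= radius)

# Weights exist only where connectivity is programmed
W = rng.normal(0, 0.01, (n, n)) * mask

def masked_update(W, dW):
    """Apply a learning update only on programmed connections."""
    W += dW * mask
```

With these numbers each NP reaches at most 25 of the 256 NPs in the adjoining layer, so under 10% of the full weight matrix is populated, which is what makes the larger device sizes tractable.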

Ambitions

Our technology is flexible.  Beyond these initial designs, we plan to implement other machine-learning tasks in a family of ultra-efficient devices.  Please contact us to learn more.