Disruptive technology product strategy
At Artificial Learning we continue to advance our market strategy, Hardware for Deep Learning and Mobile Autonomy, designing products that help our disruptive technology cross the chasm between early adopters and the mass market.
Many thanks to everyone who has responded so far to our call for applications of machine learning on a board. We will soon release updated product designs reflecting what you told us. The call remains open – please share it with others interested in machine learning hardware.
To discuss our strategy further, please contact us.
We have refined our outline design specifications to add more detail to the Machine Learning on a Board (MLoaB) product concept.
We have just opened a call for proposals for applications of machine learning on a single, easy-to-use board.
Please see our current calls page to join.
We are proud to say our Opportunity Execution Project made the front page of the most-voted list in Stanford University’s recent course on Technology Entrepreneurship.
Contact us through our front page if you want to discuss it with us.
Our technology leaves many free parameters for developing ultra-efficient integrated circuits that run a wide range of deep learning algorithms. Help us shape and choose those parameters so we can design the best chip for your application.
Please visit our application calls page and answer no more than six questions to give us your view of machine learning and the opportunity ultra-efficient hardware would offer you, or contact us to discuss your needs.
People like Hermann Hauser, co-founder of ARM, and industry big hitters like IBM and Microsoft say machine learning is the next big wave in computing after smartphones.
But machine learning today consumes huge resources and is trapped in warehouses full of hot computers. In 2012, Google needed 16,000 processors burning megawatts of power just to learn to see cats in videos. This inefficiency severely limits machine learning applications in autonomous systems like robots, unmanned vehicles or the billions of sensors the Internet of Things will bring. Even with cheaper graphics processing unit (GPU) implementations, these systems would simply be too big and hot, or too tethered to a warehouse and power station, to be able to learn independently.
So we see a great opportunity in bridging the efficiency gap.
We believe we know how. We are Artificial Learning Ltd, working with top UK universities to develop novel, ultra-efficient silicon chips specifically for machine learning.
We want to make machine learning thousands – perhaps millions – of times more efficient. Instead of needing a warehouse and a power station, our integrated circuit designs will enable powerful machine learning to be embedded in devices that can sit in the palm of your hand and run on batteries.
Machine learning (ML) is how Google, Apple and other cloud services provide search and voice recognition. ML is how robots and other autonomous systems can best make sense of the world they inhabit. It is recognised as the best approach to analysing unstructured data like text, pictures, video or sound in order to categorise and recognise its content and to associate the content with meaning.
ML implementations divide along two axes: architecture (biomimetic or theoretic) and embodiment (hardware or software).
Artificial Learning's focus is on theoretic hardware, but our technology also applies to biomimetic hardware.
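The two axes above give four possible quadrants. As a minimal illustrative sketch (the labels are ours for illustration only, not from any Artificial Learning specification), the classification can be written out as:

```python
from itertools import product

# The two axes described above.
architectures = ["biomimetic", "theoretic"]
embodiments = ["hardware", "software"]

# The four resulting quadrants of ML implementations.
quadrants = list(product(architectures, embodiments))

# Artificial Learning's primary focus is one quadrant ...
focus = ("theoretic", "hardware")
# ... with application in a second.
also_applies = ("biomimetic", "hardware")

assert focus in quadrants and also_applies in quadrants
```

This is simply the 2×2 taxonomy made explicit: four combinations, of which the company targets the two hardware quadrants.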