For the NonStop community, Swarm Learning should capture your attention
By Justin Simonds, Master Technologist at Hewlett Packard Enterprise
Artificial intelligence, machine learning, and deep learning have become part of our vocabulary. From the news of AI victories in chess, Go, and poker, to the many advances in natural language we see in our interactions with Siri, Alexa, and Google Assistant, to increasingly autonomous vehicles, AI is advancing quickly. I have been very fortunate to have worked on a few AI projects myself.
Back in the 2007 timeframe, Keith Moore and I, along with several others, worked on a project code-named 'FLATLAND' after the book by Edwin Abbott about a two-dimensional world (Keith thought of that)! We were playing around with text analytics, getting some good information out, and had some prestigious pilots about ready to launch when the recession of 2008/2009 hit and everything stopped. Still, we had a very interesting time working with HP Labs. I never lost my interest in AI and have given a talk or two on the subject at TBC and various TUG meetings.
This past year NonStop was scaling to support unemployment checks for US workers out of work due to Covid-19; I wrote an article and gave a presentation on that. NonStop was also involved in one of the laboratories doing Covid-19 testing, so we had our hand in, so to speak. The greater HPE was 'all in' on assisting during the pandemic. The HPE AI labs pulled in all articles associated with Covid-19 and made that facility available to all medical personnel searching for a vaccine or just for more information. It was a library of everything Covid related.
Additionally, HPE AI teams were working with various universities, clinics, and hospitals to model information around the pandemic, using a new technique called Swarm Learning. This is a fascinating area. In machine learning, a system is given training data and develops a model based on it. Then test data (data the system has not seen) is run through the model, and one can see how accurate the model is on the new data.
If the accuracy is high enough – perhaps greater than 95% – the system is considered trained and usable in a production environment. If the accuracy is too low, more training is required. Of course, systems in production continue refining their models based on the new information coming in. What if those models and all that training could be shared? That is the basis for Swarm Learning, and that in itself is exciting.
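The train-then-test cycle described above can be sketched in a few lines. This is a hypothetical toy (not HPE code): a deliberately simple classifier, here a nearest-centroid model on synthetic one-dimensional data, trained on one set of samples and then scored on held-out samples it has never seen, with the 95% threshold deciding whether it is "production ready."

```python
import random

# Toy illustration of the train/test cycle (hypothetical, not HPE code).
random.seed(42)

# Two synthetic classes: class 0 clusters near 0.0, class 1 near 5.0.
def make_samples(n):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        value = random.gauss(0.0 if label == 0 else 5.0, 1.0)
        data.append((value, label))
    return data

train_data = make_samples(200)  # data the system learns from
test_data = make_samples(100)   # data the system has never seen

# "Training": the model is simply the mean (centroid) of each class.
def train(data):
    sums, counts = [0.0, 0.0], [0, 0]
    for value, label in data:
        sums[label] += value
        counts[label] += 1
    return [sums[i] / counts[i] for i in (0, 1)]

# Prediction: assign the class whose centroid is closer.
def predict(model, value):
    return 0 if abs(value - model[0]) <= abs(value - model[1]) else 1

model = train(train_data)

# Accuracy on unseen test data decides whether the model is trained enough.
correct = sum(1 for v, label in test_data if predict(model, v) == label)
accuracy = correct / len(test_data)
print(f"test accuracy: {accuracy:.0%}")
if accuracy > 0.95:
    print("model considered trained and usable")
else:
    print("more training required")
```

A real deep-learning model has millions of weights rather than two centroids, but the evaluation loop is the same: hold data back, score the model on it, and keep training until the accuracy clears the bar.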
What really interested me was that what is shared is the model itself, not the data used to create it. So hospitals on the east coast could share what they learned with hospitals in the Midwest and on the west coast without exchanging any patient data. I see a lot of applications for this. For me this was a major jump in security, since the data was never shared.
The model itself is just a collection of mathematical weights inside the units of the model. With Swarm Learning, only those weights are shared, so no data is ever passed between the nodes. Participants in the swarm get the benefit of the training without needing access to the data that was used to train the model. A very secure system.
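To make the weight-sharing idea concrete, here is a deliberately simplified sketch (this is not HPE's actual Swarm Learning implementation, which coordinates the merge step across nodes in a far more sophisticated way): each "hospital" trains a small linear model on its own private data, and only the resulting weights are averaged into a shared model. The data never leaves its node.

```python
import random

# Simplified sketch of the weight-sharing idea behind Swarm Learning
# (hypothetical code, not HPE's implementation). Only weights travel
# between nodes; raw data stays put.
random.seed(7)

NUM_FEATURES = 3

def make_private_data(n):
    """One node's private data; the underlying relation is
    y = 1*x0 + 2*x1 + 3*x2 (known here only because the data is synthetic)."""
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in range(NUM_FEATURES)]
        y = 1 * x[0] + 2 * x[1] + 3 * x[2]
        data.append((x, y))
    return data

def local_train(private_data, epochs=100, lr=0.01):
    """Train a tiny linear model on one node's private data (plain SGD)."""
    weights = [0.0] * NUM_FEATURES
    for _ in range(epochs):
        for features, target in private_data:
            pred = sum(w * x for w, x in zip(weights, features))
            err = pred - target
            weights = [w - lr * err * x for w, x in zip(weights, features)]
    return weights

def swarm_merge(all_weights):
    """The sharing step: average the weights element-wise.
    Note the inputs are weights only; no training data is ever seen here."""
    n = len(all_weights)
    return [sum(ws[i] for ws in all_weights) / n for i in range(NUM_FEATURES)]

# Three hospitals, each with its own private data.
hospitals = [make_private_data(50) for _ in range(3)]

# Each node trains locally; only the learned weights join the swarm.
local_models = [local_train(data) for data in hospitals]
global_model = swarm_merge(local_models)
print("merged weights:", [round(w, 2) for w in global_model])
```

The merged weights end up close to the true relation even though no node ever saw another node's records, which is the security property the article describes: the learning is pooled, the patient data is not.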
I will be giving a high-level talk on the basics of Swarm Learning at this year’s NonStop TBC. If you’re not busy during that time, please join me for an overview of this fascinating new area in AI.