NonStop Trends & Wins


Machine Unlearning

There’s an old Star Trek episode called “Requiem for Methuselah” with a poignant ending. Captain Kirk has fallen in love with a woman who subsequently died, and Kirk is mourning her deeply. He’s going through the motions as captain but is devastated. At the very end of the episode, Kirk falls asleep, and Spock comes over, performs the Vulcan mind meld, and simply says “Forget.” A nice episode if you like Star Trek, and it plays into my topic for this edition: unlearning, or selective forgetting. When Spock says “Forget,” he certainly doesn’t mean ‘how to run a starship’ or ‘how to speak’; it is a very selective forgetting of the specific memory that was torturing Captain Kirk. Selective forgetting, or unlearning. An interesting topic.

The other day I received an email notification that an old friend and one-time colleague, TC Janes, highlighted an article on LinkedIn. The article was on Machine Unlearning. As many of you know, I have been following AI for quite a while now and have been an advocate for explainability from early on. That is the ability of an AI system to ‘explain’ why it made the decision it made. For those who might have seen my 2019 TBC talk “AI-ML-DL OV and History”, I used a rather amusing example from the University of California, Irvine (UCI). They had attempted to train a system to distinguish between a dog and a wolf. They seemingly had incredible hit rates for success, over 92%, and were congratulating themselves on a ‘well taught’ system. When they interpreted the ML weights in the Deep Learning model, they were frustrated to discover that what the machine had ‘learned’ was that if there was snow in the picture, it was a wolf, not a snow dog. It is amusing, but it is a great example of unintentional bias.

At that point, all you could do was scrub the datasets and go through a relearning process with better, unbiased datasets. For errors such as this, the simplest solution has been to identify the problematic datasets, exclude them, and then retrain the entire model from scratch. While this method is still the simplest, it is excessively expensive and, of course, time-consuming. And what if you find something wrong in the next model? You start over again, unless something can be developed to unlearn the specific offending dataset. Recent estimates put the cost of training a large Machine Learning model at about $4 million, so retraining may soon be an unaffordable option. The term ‘machine unlearning’ was coined back in 2015, so the idea has been around for a while.
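To make the alternative concrete, one well-known line of work in the unlearning literature (the so-called SISA approach) trains an ensemble of models on disjoint shards of the data, so that forgetting a record only requires retraining the single shard that contained it. Below is a minimal, illustrative sketch of that idea in Python with scikit-learn; the toy data, shard count, and function names are my own assumptions, not anything from the article.

```python
# Minimal sketch of shard-based "exact" unlearning: split the training
# data into independent shards, train one model per shard, and predict
# by majority vote. Forgetting a record then only requires retraining
# the one shard that contained it, not the whole ensemble.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))          # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels

NUM_SHARDS = 5
shards = np.array_split(np.arange(len(X)), NUM_SHARDS)  # record -> shard map
models = [LogisticRegression().fit(X[idx], y[idx]) for idx in shards]

def predict(x):
    """Majority vote over the per-shard models."""
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return int(np.round(np.mean(votes)))

def forget(record_id):
    """Drop one record and retrain only its shard; the other models
    never saw the record, so they are untouched."""
    for s, idx in enumerate(shards):
        if record_id in idx:
            kept = idx[idx != record_id]
            shards[s] = kept
            models[s] = LogisticRegression().fit(X[kept], y[kept])
            return s

affected = forget(42)   # unlearn record 42; only one shard is retrained
print(f"retrained shard {affected}; sample prediction: {predict(X[0])}")
```

The point of the sketch is the cost model: a forget request touches one-fifth of the training data here instead of all of it, which is exactly the trade-off unlearning research is chasing.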

See https://ieeexplore.ieee.org/document/7163042?arnumber=7163042&tag=1 for the first paper I’m aware of on this subject. As mentioned, there are a number of legitimate reasons (besides screw-ups) for wanting a ‘forget’ capability. Privacy is likely the top concern: regulations, especially from Europe, are requiring companies to erase information about individuals, if requested. There are also situations involving identity theft, where the theft has left behind a lot of false information. How is that cleansed from production ML models? This takes us into security concerns, always a factor: what if a bad actor has compromised your training datasets? You might end up with an inaccurate, biased model that they (the bad actors) could use advantageously. Usability is a concern as well. Take recommendation engines, which make recommendations based on your search history. Suppose you allowed a friend with different tastes to use your laptop; that session could generate some unusual recommendations, to say the least. How do you erase that particular session, especially if it has already been integrated into a company’s AI/ML systems?

Datasets for machine learning come from a variety of sources, sometimes containing subsets or aggregated data. Consider an ML system that gets training data from OLTP systems, an Operational Data Store, a Data Lake, and other NoSQL databases existing within the company. I would be fairly sure that the OLTP data has fanned out to all those other data sources, but likely in somewhat different forms. If you need to forget a piece of information, tracking down its full data lineage is extremely complicated.
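One way to make that tractable is to tag every record with a stable source identifier as it propagates, so a single erasure request can be fanned out to every derived copy. Here is a minimal sketch of such a lineage registry; the store names, keys, and class are hypothetical, purely to illustrate the idea.

```python
# Minimal sketch of record-level lineage tracking: every derived dataset
# keeps the originating record's ID, so a "forget" request can be fanned
# out to each downstream copy. Store names and keys are hypothetical.
from collections import defaultdict

class LineageRegistry:
    def __init__(self):
        # source record ID -> set of (store, derived_key) locations
        self.locations = defaultdict(set)

    def register(self, source_id, store, derived_key):
        """Record that `store` holds a copy or derivation of `source_id`."""
        self.locations[source_id].add((store, derived_key))

    def forget(self, source_id):
        """Return every downstream location that must also be erased."""
        return self.locations.pop(source_id, set())

registry = LineageRegistry()
# OLTP record cust-123 fans out as it is copied and transformed:
registry.register("cust-123", "ods", "ods/customers/123")
registry.register("cust-123", "data_lake", "s3://lake/2024/cust-123.parquet")
registry.register("cust-123", "feature_store", "features/cust-123")

# An erasure request now yields the full cleanup worklist:
for store, key in registry.forget("cust-123"):
    print(f"erase {key} from {store}")  # and flag affected models for unlearning
```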

ML systems must be designed to forget sensitive data quickly and completely, and to expose that data’s lineage, in order to restore the privacy, security, and usability discussed above. These new systems need to make this lineage visible to users. Ideally, such a system would allow users to specify which data to forget, at different levels of granularity. The effectiveness of these forgetting systems would be evaluated with the two metrics described in the IEEE article: how quickly they can forget (timeliness) and how completely they can forget data (completeness). The higher these metrics, the better the systems are at restoring privacy, security, and usability. And perhaps it goes without saying, but these metrics need to be better, faster, and less expensive than retraining the model from scratch.
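As a rough illustration of how one might operationalize those two metrics when comparing an unlearning method against retraining from scratch, here is a small sketch. The definitions below (speedup over retraining for timeliness, prediction agreement with a freshly retrained reference model for completeness) are my own illustrative stand-ins, not the formal definitions from the IEEE paper.

```python
# Illustrative scoring of an unlearning method against full retraining.
# "Timeliness" is taken here as the speedup over retraining from scratch,
# and "completeness" as how closely the unlearned model's predictions on
# the forgotten records match a model that never saw them. These are
# stand-in definitions for the sketch, not the paper's formal metrics.
import numpy as np

def timeliness(retrain_seconds, unlearn_seconds):
    """Speedup factor: > 1 means forgetting beats retraining."""
    return retrain_seconds / unlearn_seconds

def completeness(unlearned_preds, retrained_preds):
    """Fraction of forgotten-record predictions that agree with a
    reference model retrained without those records (1.0 = perfect)."""
    unlearned_preds = np.asarray(unlearned_preds)
    retrained_preds = np.asarray(retrained_preds)
    return float(np.mean(unlearned_preds == retrained_preds))

# Hypothetical measurements:
print("timeliness:", timeliness(retrain_seconds=7200, unlearn_seconds=90))
print("completeness:", completeness([1, 0, 1, 1], [1, 0, 1, 0]))
```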

Anyway, as if AI wasn’t complicated enough, I just wanted to highlight one more wrinkle.

Author

  • Justin Simonds

    Justin Simonds is a Master Technologist for the Americas Enterprise Solutions and Architecture group (ESA) under the mission-critical division of Hewlett Packard Enterprise. His focus is on emerging technologies, business intelligence for major accounts, and strategic business development. He has worked on Internet of Things (IoT) initiatives and integration architectures for improving the reliability of IoT offerings. He has been involved in the AI/ML HPE initiatives around financial services and fraud analysis and was an early member of the Blockchain/MC-DLT strategy. He has written articles and whitepapers for internal publication on TCO/ROI, availability, business intelligence, Internet of Things, Blockchain, and Converged Infrastructure, and has been published in Connect/Converge and Connection magazine. He is a featured speaker at HPE’s Technology Forum, at HPE’s Aspire and Bootcamp conferences, and at industry conferences such as the XLDB Conference at Stanford, IIBA, ISACA, and the Metropolitan Solutions Conference.
