
AI & Privacy: Frenemies Forever

An Interview with Craig Gentry, TripleBlind CTO

Craig Gentry is the CTO of TripleBlind. He is one of the most cited cryptographers in the world, best known for inventing the first fully homomorphic encryption system and for his work at IBM Research to make homomorphic encryption much faster. His other inventions, such as aggregate signatures and the Pinocchio zero-knowledge proof protocol, are widely used in cryptocurrencies. For his work, he won the prestigious Gödel Prize and the MacArthur “Genius” Award, and was named a Fellow of the International Association for Cryptologic Research.

Q: Why is privacy important to AI?  

Privacy is a fundamental human right. And responsible AI is about making AI reflect our values, aligning it to some extent so that it doesn’t become a vehicle to some sort of dystopian future. We want AI to respect our privacy and manage our data in a way that empowers us.

Personal data is analogous to radioactive material. There is a right way and a wrong way to handle radioactive material, just as there is a right way and a wrong way to handle private data. Both require handling according to a principle of minimal exposure, maintaining safety while still getting utility. AI aggregates and analyzes data, bringing it to critical mass. Whether AI leads us to the equivalent of a catastrophic exposure or beneficial infinite energy is largely up to us. AI can oppress us or empower us, and how to reconcile AI with human privacy is a big part of that equation.


Q: How do you design solutions that ensure data privacy in a form that works for any model and any data?

Handling data and using it in a private way is a much more complicated problem than protecting data at rest or data in transit, which is basically solved through encryption. But we do have the technologies, and if we’re intentional about using them, we can have AI and privacy at the same time. In fact, AI and privacy are like frenemies: superficially they seem adversarial, but used properly they can make each other stronger. AI can enable better privacy-preserving technologies, and privacy gives AI guardrails that make it more usable. AI helps solve AI’s own privacy problem.

Privacy-enhancing technologies include things like cryptography, which is my background. I did a lot of work on homomorphic encryption, which is a way to do a computation on data while it’s encrypted, without decrypting it. I have this analogy of Alice’s jewelry box: Alice is a jewelry store owner, and she wants her workers to make rings and necklaces out of the gold, diamonds, and silver that she gets. But she doesn’t trust her workers. So, what does she do? She creates a lockbox with gloves in it. This allows the workers to manipulate the raw materials inside the locked box to create the finished piece. And because her workers can’t unlock the box, they have to give it back to Alice, who has the keys and can extract the finished jewelry.

Cryptography allows you to do that sort of thing with data. You can put it in a locked box, you can give it to the cloud, and the cloud can do a computation for you, and the cloud won’t learn anything about your data.  
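To make the idea concrete, here is a minimal sketch in Python of additively homomorphic encryption using the Paillier scheme. This is an illustration, not Gentry’s fully homomorphic construction: Paillier only supports addition on ciphertexts, and the parameters below are toy-sized for readability, far too small for real security.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy Paillier keypair. Real deployments use primes of 1024+ bits;
# these are tiny so the example runs instantly and stays readable.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = lcm(p - 1, q - 1)        # private key part 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # private key part 2

def encrypt(m):
    """Encrypt integer m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext using the private key (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so the "cloud" can compute a sum without ever seeing 15 or 27.
c1, c2 = encrypt(15), encrypt(27)
assert decrypt((c1 * c2) % n2) == 42
```

In the jewelry-box analogy, `encrypt` locks the raw materials into the box, the ciphertext multiplication is the workers manipulating them through the gloves, and only Alice, who holds `lam` and `mu`, can open the box with `decrypt`.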

We also have something called secure multiparty computation (SMPC), which achieves a similar goal and is often more computationally efficient than homomorphic encryption.
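For a flavor of how SMPC works, here is a minimal sketch in Python of additive secret sharing, one of SMPC’s basic building blocks. The scenario and values are hypothetical, and real protocols add authentication and support far richer computations than a sum.

```python
import random

PRIME = 2**61 - 1  # all arithmetic happens in a field mod this prime

def share(secret, n_parties):
    """Split a secret into n uniformly random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three parties each hold a private value they don't want to reveal.
inputs = [120, 45, 300]
all_shares = [share(x, 3) for x in inputs]

# Party i receives the i-th share of every input. Each share alone is
# uniformly random, so no single party learns anything about the inputs.
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

# Publishing only the partial sums reveals the total, and nothing else.
assert sum(partial_sums) % PRIME == sum(inputs)
```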

And we have things like secure enclaves, which are trusted hardware environments where users can bring together a model and data in that safe space. Computation can happen, and neither the model owner nor the data owner will learn anything that they shouldn’t.  

We have all these technologies at our disposal. AI vastly amplifies our privacy problem, but it also gives us additional tools to solve it.

Q: How can enterprises make privacy technologies easier to adopt? 

TripleBlind’s vision is about making these privacy-enhancing technologies usable, not just for the enterprise, but ultimately for individuals, because that’s an open space in the market right now. Currently, you can’t expect to deploy these technologies without hiring cryptographic or security expertise, and that’s not the way it should be. Ultimately, privacy should not come at a usability cost. Rather, privacy and governance should be about making technologies like AI more usable to enterprises.

Our observation is that if you put these technologies in place, that makes these models more usable for the enterprise. And I’m talking about two layers of usability here: usability of AI and usability of privacy tools. The TripleBlind vision is to solve both of those problems.

Q: What is the offering TripleBlind is bringing to market? How does it work? 

Ultimately, TripleBlind aims to be a zero-trust platform, a middle layer between the privacy-enhancing technologies and the AI applications that the enterprise wants to run. The enterprise shouldn’t have to think about all the details of how the privacy technologies are implemented. The TripleBlind zero-trust platform will figure that out in a transparent, automatic way, depending on the nature of the data transaction, its scale, and so on.

The zero-trust aspect means that there’s no perimeter anymore; the firewall is becoming obsolete. Rather than a firewall, enterprises now need an intelligent but porous perimeter, one that includes an intelligent, privacy-aware monitor addressing the privacy concerns of the enterprise in real time and in a transparent way.