May 19, 2024

How confidential computing could secure the adoption of generative AI

Generative AI has the potential to change everything. It can inform new products, companies, industries, and even economies. But what makes it different from, and better than, “traditional” AI could also make it dangerous.

Its unique ability to create has opened up a whole new set of security and privacy concerns.

Suddenly, companies have to ask themselves new questions: Do I have the rights to the training data? To the model? To the outputs? Does the system itself have rights to data created in the future? How are the rights to that system protected? How do I govern data privacy in a model that uses generative AI? The list goes on.

Not surprisingly, many companies are treading lightly. Blatant security and privacy vulnerabilities, coupled with hesitancy to trust existing Band-Aid solutions, have pushed many to ban these tools completely. But there is hope.

Confidential computing, a new approach to data security that protects data while it is in use and ensures code integrity, is the answer to the more complex and serious security concerns of large language models (LLMs). It is poised to help companies harness the full power of generative AI without compromising security. Before we explain how, let’s first take a look at what makes generative AI especially vulnerable.

Generative AI has the ability to ingest data from an entire company, or even an insight-rich subset, into an intelligent, queryable model that instantly provides new insights. This has great appeal, but it also makes it extremely difficult for companies to maintain control over their proprietary data and comply with evolving regulatory requirements.

This concentration of knowledge and the subsequent generative results, without adequate data security and trust control, could inadvertently weaponize generative AI for abuse, theft and illicit use.

In fact, employees are increasingly feeding sensitive business documents, customer data, source code, and other regulated information into LLMs. Since these models are partially trained on new inputs, this could lead to significant IP leaks in the event of a breach. And if the models themselves are compromised, any content that a company is legally or contractually obligated to protect could also leak. In the worst case, theft of a model and its data would allow a competitor or nation-state actor to duplicate both outright.

These are high stakes. Gartner recently found that 41% of organizations have experienced an AI privacy breach or security incident, with more than half being the result of a data compromise by an internal party. The advent of generative AI will surely increase these numbers.

Separately, companies must also keep up with evolving privacy regulations when investing in generative AI. In all industries, there is a great deal of responsibility and incentive to comply with data requirements. In healthcare, for example, AI-powered personalized medicine has enormous potential when it comes to improving patient outcomes and overall efficiency. But providers and researchers will need to access and work with vast amounts of sensitive patient data while remaining compliant, presenting a new dilemma.

To address these challenges and the others that will inevitably arise, generative AI needs a new security foundation. Protecting training data and models must be the top priority; it is no longer enough to encrypt fields in databases or rows in a table.

In scenarios where the results of generative AI are used for important decisions, evidence of code and data integrity, and the trust it conveys, will be absolutely critical, both for compliance and for managing potential legal liabilities. There must be a way to provide watertight protection for the entire computation and the state in which it runs.
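To make that concrete: in confidential-computing environments, this evidence typically arrives as a hardware-signed attestation report that includes a measurement (a hash) of the code that actually ran. Below is a minimal, hypothetical sketch in Python of a consumer checking such a report before acting on a model’s output; the report format, field names, and the use of an HMAC in place of a real hardware certificate chain are simplifying assumptions, not any vendor’s actual API.

```python
import hashlib
import hmac

# Hypothetical: hash of the audited model-serving code we expect to see
# in the attestation report (published by the provider of the enclave image).
EXPECTED_MEASUREMENT = "9f2c7d"  # illustrative placeholder digest

def output_is_trustworthy(report: dict, attestation_key: bytes) -> bool:
    """Accept a generative AI result only if the attestation report is
    authentic and shows the expected code measurement. Real TEEs verify a
    hardware-rooted certificate chain; an HMAC stands in for that here."""
    payload: bytes = report["payload"]          # serialized report body
    claimed_sig: str = report["signature"]
    expected_sig = hmac.new(attestation_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, claimed_sig):
        return False                            # report is not authentic
    return report["measurement"] == EXPECTED_MEASUREMENT

# Usage sketch: gate any important decision on a valid report.
# if output_is_trustworthy(response["attestation"], shared_attestation_key):
#     act_on(response["output"])
```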

The advent of “confidential” generative AI

Confidential computing offers a simple, yet enormously powerful way to solve what would otherwise seem like an intractable problem. With confidential computing, data and IP are completely isolated from infrastructure owners and only accessible by trusted applications running on trusted CPUs. Data privacy is guaranteed by encryption, even during execution.
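As a rough sketch of that flow (illustrative names only, not a specific TEE or cloud SDK): the data owner keeps its dataset encrypted, asks the environment to prove which code it is running, and releases the decryption key only to an environment that passes that check.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AttestationQuote:
    """Hypothetical stand-in for an SGX/SEV/TDX-style attestation quote."""
    code_measurement: str   # hash of the code loaded into the trusted environment
    signature: bytes        # signed by the CPU vendor's hardware-rooted key

# Measurements of enclave builds the data owner has audited and trusts.
TRUSTED_MEASUREMENTS = {"a1b2c3d4"}  # illustrative value

def release_dataset_key(quote: AttestationQuote,
                        dataset_key: bytes,
                        vendor_signature_ok: Callable[[AttestationQuote], bool]
                        ) -> Optional[bytes]:
    """Hand the dataset's decryption key only to an attested environment;
    otherwise the data stays ciphertext and is useless to the infrastructure
    operator or to an attacker who breaches it."""
    if not vendor_signature_ok(quote):
        return None
    if quote.code_measurement not in TRUSTED_MEASUREMENTS:
        return None
    # In practice the key is wrapped to a public key held only inside the
    # trusted environment, so it is never visible in the clear outside it.
    return dataset_key
```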

Data security and privacy become intrinsic properties of cloud computing, so much so that even if a malicious attacker breaches the infrastructure, the data, IP, and code are completely invisible to that bad actor. This is perfect for generative AI, since it mitigates its security, privacy, and attack risks.

Confidential computing has been gaining more and more ground as a security game changer. All the major cloud providers and chipmakers are investing in it, with Azure, AWS, and GCP all proclaiming its effectiveness. Now the same technology that is winning over even the staunchest cloud holdouts could be the solution that helps generative AI take off safely. Leaders need to start taking it seriously and understand its profound implications.

With confidential computing, companies gain the assurance that generative AI models only learn about the data they intend to use, and nothing else. Training with private data sets in a network of trusted cloud sources provides complete control and peace of mind. All information, whether it is an input or an output, remains fully protected and behind the four walls of a company.
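One way to picture “inputs and outputs stay behind the company’s four walls” is client-side encryption, where the company holds the key and, as in the attestation sketches above, releases it only to a trusted environment. The snippet below uses the Python cryptography package’s Fernet symmetric encryption purely as an illustration; the surrounding attestation and key-release machinery is assumed.

```python
from cryptography.fernet import Fernet

# Key held by the company; in a confidential-computing deployment it would
# be released only to an environment that passed attestation.
company_key = Fernet.generate_key()
cipher = Fernet(company_key)

# The prompt is encrypted before it leaves the company's environment...
encrypted_prompt = cipher.encrypt(b"Summarize Q3 revenue by region.")

# ...and the model's answer comes back encrypted as well (illustrative value).
encrypted_output = cipher.encrypt(b"(model output)")

# Only the key holder ever sees plaintext on either side of the exchange.
print(cipher.decrypt(encrypted_output).decode())
```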
