The Cryptographic Future of Privacy

Data sharing that does not involve raw data transmission.

Mathematically enforced privacy for data and algorithms allows companies to collaborate around sensitive information without the privacy and legal risks present today.

TripleBlind’s patented technology applies a one-way transformation to data that can be used for authorized purposes only, keeping data in place and ensuring compliance with all regional and national privacy regulations.

Advanced Cryptography, Simple API

TripleBlind requires no knowledge of the underlying cryptography.

A simple, familiar API allows machine learning and data scientists to work as usual, but with privacy built into their work.

Built to Accelerate Responsible Innovation

Blind De-Identification

One-way transformations de-identify data without anonymizing it, allowing all data to be computable without being visible.
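
As a loose analogy only (not TripleBlind's actual transformation), the sketch below uses a random projection to illustrate the general idea: the transformed data remains useful for distance-based computation, while the original records cannot be uniquely recovered from it.

```python
# Illustrative analogy (NOT TripleBlind's actual method): a random projection
# keeps data computable (approximate distances survive) while the original
# high-dimensional records cannot be uniquely recovered from the output.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512))               # 100 records, 512 features
R = rng.normal(size=(512, 64)) / np.sqrt(64)  # random projection matrix
Z = X @ R                                     # "transformed" data

# Distances are roughly preserved, so distance-based analysis still works...
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Z[0] - Z[1])
print(round(d_orig, 2), round(d_proj, 2))

# ...but the map is many-to-one (512 -> 64 dimensions), so X cannot be
# uniquely reconstructed from Z without additional information.
```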

Blind Data Utilization Toolbox

A suite of data utilization tools equips data scientists and researchers to harness a variety of privacy primitives fit for any data sharing task.

How does TripleBlind compare to other technologies?

Privacy-enhancing technologies (PETs) and trusted execution environments (TEEs) have attempted to solve the private data sharing problem. We’re often asked how TripleBlind compares to what’s been done before.

Read on to see how TripleBlind stacks up.

Homomorphic Encryption

Homomorphic encryption allows operations on encrypted data to occur, but the mechanisms used cause significant and costly performance penalties.
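
As a concrete illustration of the general idea (using the well-known Paillier cryptosystem with toy-sized parameters, not TripleBlind's or any production scheme), the sketch below adds two numbers directly on their ciphertexts; the repeated large modular exponentiations are where the performance penalty comes from.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny primes for illustration only -- real deployments use 2048+ bit moduli,
# which is where the heavy performance cost comes from.
import math
import random

p, q = 61, 53                      # toy primes (never use in practice)
n = p * q                          # public modulus
n_sq = n * n
g = n + 1                          # standard generator choice
lam = math.lcm(p - 1, q - 1)       # private key component
mu = pow(lam, -1, n)               # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n_sq           # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == 42        # 17 + 25, computed without ever decrypting
```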

Speed

TripleBlind is a trillion times faster than homomorphic encryption and more versatile across industries and use cases.

Future-Proof

Homomorphic encryption is not known to be quantum safe, while TripleBlind holds a proof of quantum resistance.

Practical Use

Homomorphic encryption is limited in both the types of data it supports and the operations that can be performed on that data.

The technology has been around since 2009 and is still far too slow to meet today's performance demands for data-heavy tasks.

Homomorphic encryption is impractical for time-sensitive tasks, especially in the healthcare industry where immediate results are often needed. 

TripleBlind enables all data operations to occur on any type of data, without adding speed penalties or requiring additional storage. In fact, some operations occur faster with TripleBlind than they would without any encryption.

Secure Enclaves

Secure enclaves are hardware-isolated execution environments built into processors that enable confidential computing, a method of protecting data in use from other users or programs running on the same machine or cloud server. This approach can be useful when all the data and algorithms being used live on the same CPU or server, but the technology has many limitations.

Data Centralization

Secure enclaves require both the data and the algorithm to be stored in one place, which silos data behind data residency and GDPR fences. 

Secure enclaves work well for operations on a single server, but they do not solve the problem of enabling private, distributed data operations across multiple organizations, servers, or national borders.

TripleBlind allows users to operate on distributed datasets without ever moving the data or the algorithm, meaning global data collaborations can occur with complete regulatory compliance and reduced liability.

Updates

Because secure enclaves are implemented in hardware, they cannot be patched or upgraded the way software can. With an ever-evolving landscape of threats to privacy, this is a serious limitation: technology moves fast, but so do those aiming to break it.

As a software-only solution with no specific hardware dependencies, TripleBlind can be updated over the air.

Tokenization

Tokenization substitutes sensitive data values such as Social Security numbers, age, gender, and other HIPAA identifiers with non-sensitive 'tokens' that reference the original sensitive elements but are not usable in data operations. This approach leaves valuable data out of computations and is not fully private, because the de-identification is reversible.
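
A minimal sketch of how tokenization typically works, with an in-memory 'vault' invented for illustration: the token carries no analytic value, and anyone holding the vault can reverse it back to the original element.

```python
# Minimal tokenization sketch: sensitive values are swapped for random tokens,
# and a "vault" keeps the mapping -- which is exactly why the process is reversible.
import secrets

vault = {}  # token -> original value (the re-identification risk lives here)

def tokenize(value: str) -> str:
    token = secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    return vault[token]

record = {"name": "Jane Doe", "ssn": "123-45-6789", "cholesterol": 212}
record["ssn"] = tokenize(record["ssn"])

print(record)                      # SSN replaced by an opaque token
print(detokenize(record["ssn"]))   # but the vault can always map it back
```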

Privacy

Our solution does not use tokenization, hashing, or masking to de-identify data. Solutions reliant on tokens are not fully private; it has been proven many times over that reversal and re-identification are still possible.

Data Fidelity

Tokenization reduces the fidelity of the data. One of TripleBlind's key benefits and unique qualities is that we preserve 100% of the dataset's fidelity for use in operations: every data element can be used.

Fields such as name, age, gender, and sex can be extremely helpful for an algorithm learning to predict important outcomes, especially in healthcare.

Anonymization Concerns

Tokenization essentially anonymizes the data by removing sensitive elements, but the remaining elements can still lead back to a specific individual. TripleBlind de-identifies without anonymizing, so the algorithm has all the information it needs while sensitive PHI, PII, and PFI are never exposed.

Synthetic Data

Synthetic data is artificial data generated algorithmically rather than collected by real-world measurement. The generation process unintentionally removes key relationships hidden in the subtleties of real datasets.
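
A small sketch of that failure mode, assuming a deliberately naive generator that samples each column independently: the per-column statistics look right, but the relationship between columns disappears.

```python
# Naive synthetic data sketch: sampling each column from its own marginal
# distribution preserves per-column statistics but destroys the relationship
# between columns -- the "key relationships" mentioned above.
import numpy as np

rng = np.random.default_rng(1)

# Real data: blood pressure rises with age (correlated columns).
age = rng.uniform(20, 80, size=5000)
bp = 90 + 0.7 * age + rng.normal(0, 5, size=5000)

# Synthetic data: each column sampled/shuffled independently.
synth_age = rng.permutation(age)
synth_bp = rng.permutation(bp)

print(np.corrcoef(age, bp)[0, 1])              # strong correlation (~0.9)
print(np.corrcoef(synth_age, synth_bp)[0, 1])  # correlation gone (~0.0)
```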

Real Data

The use of synthetic data is a response to not having enough real data. TripleBlind’s tools allow GDPR and data residency compliant sharing to occur, expanding our customers’ access to data around the world. With TripleBlind, private operations occur on real data, preserving relationships lost in synthetic data.

Scope of Analysis

Synthetic data can be useful for macro-level analysis, but reduces the chances of uncovering hidden relationships in the data.

Differential Privacy

Differential privacy is used to learn about a group or community without exposing any details on the individuals in the dataset.
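
A minimal sketch of the standard Laplace mechanism, the most common way differential privacy is applied: an aggregate query about a group is answered with calibrated noise so that no single individual's presence can be inferred.

```python
# Laplace mechanism sketch: answer an aggregate query about a group while
# hiding any single individual's contribution by adding calibrated noise.
import numpy as np

rng = np.random.default_rng(2)
incomes = rng.normal(60_000, 15_000, size=10_000)

def dp_count_above(data, threshold, epsilon):
    true_count = int(np.sum(data > threshold))
    sensitivity = 1                     # one person changes the count by at most 1
    noise = rng.laplace(0, sensitivity / epsilon)
    return true_count + noise

print(int(np.sum(incomes > 100_000)))        # exact answer (not private)
print(dp_count_above(incomes, 100_000, 1.0)) # private answer, slightly noisy
print(dp_count_above(incomes, 100_000, 0.1)) # stronger privacy => more noise
```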

Effects on Results

Differential privacy intentionally adds noise to the dataset, muddying the quality of the outputs and reducing their utility.

Computational Expense

Differential privacy is computationally expensive and adds too much noise to diverse datasets.

TripleBlind’s approach does not add significant computations and never adds noise to the data set, preserving the precision of results.

Compliance

Differential privacy does not solve regulatory privacy. Data is still fenced behind regulations like GDPR and data residency requirements. TripleBlind enables data to be de-identified in place, used remotely, and kept in full compliance with data privacy legislation.

Federated Learning

Federated learning is a process for training a machine learning model in a distributed way across different data providers while keeping data in place. However, the process is not fully private and exposes the valuable IP in the algorithm.
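
A minimal federated-averaging (FedAvg) sketch with a tiny linear model and two hypothetical data providers; note that the entire weight vector travels to each provider and back on every round.

```python
# Minimal federated averaging (FedAvg) sketch with a tiny linear model.
# The *entire* weight vector travels to every data provider and back on every
# round -- the exchange the surrounding text refers to.
import numpy as np

rng = np.random.default_rng(3)

def local_update(weights, X, y, lr=0.1, steps=10):
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w                                 # full updated model goes back

# Two providers with their own private datasets (never pooled).
providers = []
for _ in range(2):
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 200)
    providers.append((X, y))

global_w = np.zeros(5)
for _ in range(20):
    local_models = [local_update(global_w, X, y) for X, y in providers]
    global_w = np.mean(local_models, axis=0)  # server averages the full models

print(np.round(global_w, 2))  # approaches the true coefficients
```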

Computation and Storage Intensive

Federated learning requires the full AI model to be exchanged, sending gigabytes of information across parties multiple times and adding significant speed and storage overhead to training. TripleBlind's Blind Learning performs distributed model training without exchanging the full algorithm, and is faster than training a model locally.

Algorithm IP

Because federated learning ships the entire model to the data providers, they get to see it in full, exposing the model provider's valuable IP. TripleBlind never exchanges full models and applies one-way transformations to the portion of the model that the data providers receive.

Privacy Concerns

In federated learning, partially trained models are exchanged among the data providers: the first provider trains the model on its data and sends it to the second, and so on. Providers downstream from others can compare the untrained model weights to the trained weights they receive and use the difference to expose raw data.
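
A toy sketch of why that comparison leaks information, assuming a single linear model trained on one record with squared-error loss: the weight update is a scaled copy of that record's feature vector, so a downstream party holding the before-and-after weights recovers the record up to a constant.

```python
# Toy weight-leakage sketch (illustrative, not a real attack implementation):
# for a linear model trained on ONE record with squared-error loss, the weight
# update is a scaled copy of that record's feature vector.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=8)          # one provider's private record
y = 1.0                         # its label
w_before = np.zeros(8)          # model weights as received from the previous party

# One gradient step of the loss (w.x - y)^2 / 2 on the private record.
lr = 0.01
grad = (w_before @ x - y) * x
w_after = w_before - lr * grad

# A downstream party holding both weight versions recovers the record's direction.
delta = w_after - w_before      # equals -lr * (w.x - y) * x, i.e. a constant times x
recovered = delta / np.linalg.norm(delta)
print(np.allclose(recovered, x / np.linalg.norm(x)))  # True: x leaked up to scale
```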

TripleBlind only makes the first half of the model available to the data owners, and both the model and the data are protected at all times by one-way, impossible-to-decrypt transformations.

Schedule A Demo

TripleBlind is built on novel, patented breakthroughs in mathematics and cryptography, unlike other approaches built on top of open source technology. The technology keeps both data and algorithms in use private and fully computable.