The Cryptographic Future of Privacy

Data collaboration that does not involve raw data transmission.

Mathematically enforced privacy for data and algorithms allows companies to collaborate around sensitive information without the privacy and legal risks present today.

TripleBlind applies one-way encryption so that data can be used only for authorized purposes. Our privacy enhancing techniques keep data in place and ensure compliance with regional and national privacy regulations.

SOC 2 Type 1

SOC 2 is an industry standard for security compliance. TripleBlind has obtained SOC 2 Type 1 certification, reflecting our commitment to establishing and following security policies and procedures. Click below to learn more about TripleBlind security.

How TripleBlind Works

Using TripleBlind requires no knowledge of the underlying cryptography.

A simple, familiar API allows machine learning and data scientists to work as usual, but with privacy built into their work.

privacy enhancing computation

THE TRIPLEBLIND SOLUTION

The TripleBlind Solution is a simple-to-use API that allows your data to remain behind your firewall while it is made discoverable and computable by third parties for analysis and ML training.

Blind Compute

Mathematical techniques and privacy primitives used to execute a myriad of computations.

Blind Data & Algorithm API

Our method of securely connecting and managing processes performed with the solution.

Blind Data Tools

Data science tools your teams expect, like preprocessing and exploratory data analysis (EDA), designed around privacy.

Blind AI Tools

AI model training and inference, on distributed private datasets.

Blind Query

Tools for learning from protected datasets without exposing private data.

Blind Algorithm Tools

Tools for distributing models easily while maintaining full control over your IP.

For A Detailed Description Of Our Solution, Book A Demo.

How does TripleBlind compare to other technologies?

Privacy-enhancing technologies (PETs) and trusted execution environments (TEEs) have attempted to solve the private data sharing problem. We’re often asked how TripleBlind compares to what’s been done before.

Scroll down to read how TripleBlind stacks up.

Homomorphic Encryption

Homomorphic encryption allows operations to be performed directly on encrypted data, but the mechanisms it relies on impose significant and costly performance penalties.
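To make the concept concrete, here is a minimal sketch of an additively homomorphic scheme (textbook Paillier with deliberately tiny, insecure parameters). It is purely illustrative and unrelated to TripleBlind's own techniques: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts.

```python
# Toy Paillier: additively homomorphic encryption with tiny, insecure parameters.
import math
import secrets

p, q = 293, 433                     # toy primes; real deployments use ~2048-bit moduli
n = p * q
n_sq = n * n
g = n + 1                           # standard generator choice
lam = math.lcm(p - 1, q - 1)        # private key component

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # private scaling factor

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:         # randomness must be coprime with n
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n_sq    # multiply ciphertexts...
print(decrypt(c_sum))                       # ...to add plaintexts: prints 42
```

Even this single toy addition costs several large modular exponentiations per ciphertext, which hints at why fully homomorphic schemes struggle with data-heavy workloads.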

Interested in learning more about TripleBlind? We’d love to chat!

Speed
TripleBlind is a trillion times faster than homomorphic encryption and more versatile across industries and use cases.
Future-Proof
Homomorphic encryption is not known to be quantum-safe, but TripleBlind holds a proof showing quantum resistance.
Practical Use

Homomorphic encryption is limited in both the types of data it supports and the operations that can be performed on that data.

Fully homomorphic encryption has been around since 2009 and is still far too slow for the performance today’s data-heavy tasks demand.

Homomorphic encryption is impractical for time-sensitive tasks, especially in the healthcare industry where immediate results are often needed. 

TripleBlind enables all data operations to occur on any type of data, without adding speed penalties or requiring additional storage. In fact, some operations actually occur faster with TripleBlind than they would without any encryption.

Secure Enclaves

Secure enclaves are hardware-protected areas within a processor that enable confidential computing, a method of protecting data in use from other users or programs running on the same machine or cloud server. This approach can be useful when all the data and algorithms involved live on the same CPU or server, but the technology has many limitations.

Interested in learning more about TripleBlind? We’d love to chat!

Data Centralization

Secure enclaves require both the data and the algorithm to be stored in one place, which silos data behind data residency and GDPR fences. 

Secure enclaves work well for operations happening on one server, but they do not solve the problem of enabling private, distributed data operations across multiple organizations, servers, or country borders.

TripleBlind allows users to operate on distributed datasets without ever moving the data or the algorithm, meaning global data collaborations can occur with complete regulatory compliance and reduced liability.

Updates

Because secure enclaves are hardware, they cannot be updated the way software can. With an ever-changing landscape of evolving privacy threats, this can prove problematic. Technology moves fast, but so do those aiming to break it.

As a software-only solution with no specific hardware dependencies, TripleBlind can be updated over the air.

Tokenization

Tokenization substitutes sensitive data values such as Social Security numbers, ages, genders, and other HIPAA identifiers with non-sensitive ‘tokens’ that reference back to the sensitive data elements but are not used in data operations. This approach leaves valuable data out of the computations and is not fully private, because the de-identification is reversible.
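As a rough illustration of the mechanism (not a description of any specific vendor's tokenization product), the sketch below swaps identifiers for opaque tokens and keeps the reversible mapping in a vault, which is exactly why tokenized data is de-identified rather than irreversibly private.

```python
import secrets

# Minimal tokenization sketch: sensitive values are swapped for opaque tokens,
# and a vault keeps the reversible mapping. Whoever holds the vault can map
# every token back to the original value.
vault = {}            # token -> original value
reverse_index = {}    # original value -> token, so repeated values reuse a token

def tokenize(value: str) -> str:
    if value in reverse_index:
        return reverse_index[value]
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    reverse_index[value] = token
    return token

def detokenize(token: str) -> str:
    return vault[token]

record = {"name": "Jane Doe", "ssn": "123-45-6789", "cholesterol": 212}
safe_record = {**record,
               "name": tokenize(record["name"]),
               "ssn": tokenize(record["ssn"])}
print(safe_record)                      # identifiers replaced by tokens
print(detokenize(safe_record["ssn"]))   # ...but the vault reverses the mapping
```

Note that the tokenized fields carry no analytic value; a model cannot learn anything from an opaque token, which is the fidelity loss discussed below.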

Interested in learning more about TripleBlind? We’d love to chat!

Privacy

Our solution does not use tokenization, hashing, or masking to de-identify data. Solutions reliant on tokens are not fully private, as it has been proven many times over that decryption and re-identification are still possible.

Data Fidelity

Tokenization reduces the fidelity of the data. One of TripleBlind’s key benefits and unique qualities is that we preserve 100% of the dataset’s fidelity for use in operations: every data element can be used.

Name, age, gender, sex, and similar fields can be extremely helpful for an algorithm learning to predict important outcomes, especially in healthcare.

Anonymization Concerns

Tokenization essentially anonymizes the data by removing sensitive elements, but the remaining elements can still lead back to a specific individual. TripleBlind de-identifies without anonymizing, so the algorithm has all the information it needs while sensitive PHI, PII, and PFI are never exposed.

Synthetic Data

Synthetic data is artificial data generated algorithmically rather than collected through real-world measurement. The practice can unintentionally remove key relationships found in the subtleties of real datasets.
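A deliberately naive sketch of that fidelity loss: if each column is synthesized independently from the real data's marginal statistics (real synthetic-data generators try to do better, but subtle dependencies remain at risk), the relationship between columns disappears.

```python
import numpy as np

# Naive per-column synthesis: each synthetic column matches the real column's
# mean and spread, but the relationship between the columns is lost.
rng = np.random.default_rng(2)
age = rng.normal(55, 10, size=5000)
blood_pressure = 90 + 0.6 * age + rng.normal(0, 5, size=5000)   # real, correlated data

synthetic_age = rng.normal(age.mean(), age.std(), size=5000)
synthetic_bp = rng.normal(blood_pressure.mean(), blood_pressure.std(), size=5000)

print(round(np.corrcoef(age, blood_pressure)[0, 1], 2))          # about 0.77 in the real data
print(round(np.corrcoef(synthetic_age, synthetic_bp)[0, 1], 2))  # about 0.0 in the synthetic data
```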

Interested in learning more about TripleBlind? We’d love to chat!

Real Data

The use of synthetic data is a response to not having enough real data. TripleBlind’s tools allow GDPR- and data-residency-compliant sharing, expanding our customers’ access to data around the world. With TripleBlind, private operations occur on real data, preserving the relationships lost in synthetic data.

Scope of Analysis

Synthetic data can be useful for macro-level analysis, but reduces the chances of uncovering hidden relationships in the data.

Differential Privacy

Differential privacy is used to learn about a group or community without exposing any details on the individuals in the dataset.
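As a minimal illustration of the technique itself (not TripleBlind's approach), the classic Laplace mechanism answers a counting query about the group while hiding whether any single individual is in the data.

```python
import numpy as np

# Laplace mechanism for a counting query: noise is scaled to sensitivity / epsilon.
ages = [71, 34, 68, 80, 45, 59, 90, 23, 66, 77]      # toy private dataset

def dp_count(values, predicate, epsilon=0.5):
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0                                 # one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

print(dp_count(ages, lambda a: a > 65))               # noisy answer near the true count of 6
```

The trade-off is visible in the epsilon parameter: stronger privacy (a smaller epsilon) means more noise and less precise answers, which is the utility cost described below.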

Interested in learning more about TripleBlind? We’d love to chat!

Effects on Results

Differential privacy intentionally adds noise to the dataset, muddying the quality of the outputs and reducing their utility.

Computational Expense

Differential privacy is computationally expensive and adds too much noise to diverse datasets.

TripleBlind’s approach does not add significant computational overhead and never adds noise to the dataset, preserving the precision of results.

Compliance

Differential privacy does not solve regulatory privacy; data is still fenced behind regulations like GDPR and data residency rules. TripleBlind enables data to be de-identified in place and used remotely while fully complying with data privacy legislation.

Federated Learning As A Privacy Enhancing Computation Strategy

Federated learning trains a machine learning model in a distributed way across different data providers while keeping data in place. However, the process is not fully private and exposes the valuable IP embedded in the algorithm.
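For context, here is a minimal federated-averaging round (a common form of federated learning, shown only as a generic sketch): every provider trains locally on data that never moves, but the full weight vector is shipped to a coordinator and averaged each round.

```python
import numpy as np

# Minimal federated averaging for a linear model: data never moves, but the
# FULL model is exchanged between every provider and the coordinator each round.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])

# Three data providers, each holding private local data.
providers = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    providers.append((X, y))

def local_update(w, X, y, lr=0.1, steps=50):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)        # least-squares gradient
        w = w - lr * grad
    return w

w_global = np.zeros(3)
for _ in range(10):                                   # training rounds
    local_models = [local_update(w_global, X, y) for X, y in providers]  # full models sent back
    w_global = np.mean(local_models, axis=0)                             # averaging step

print(np.round(w_global, 2))                          # close to [2, -3, 0.5]
```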

Interested in learning more about TripleBlind? We’d love to chat!

Computation and Storage Intensive

Federated learning requires the full AI model to be exchanged, so gigabytes of information are transmitted multiple times across parties, adding significant speed and storage overhead to training. TripleBlind’s Blind Learning performs distributed model training without exchanging the full algorithm and is faster than training a model locally.

Algorithm IP

Because federated learning ships the entire model to the data providers, each provider can inspect the whole model, exposing the model owner’s valuable IP. TripleBlind never exchanges full models and applies one-way transformations to the portion of the model that the data providers receive.

Privacy Concerns

In federated learning, partially trained models are exchanged among the data providers: the first provider trains the model on its data and sends it to the second provider, and so on. Providers downstream from others can compare the model weights before and after training to expose information about raw data.

TripleBlind only makes the first half of the model available to the data owners, and both the model and the data are protected at all times by one-way transformations that cannot be decrypted.
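The split-network idea can be illustrated generically. The sketch below is a simplified forward pass of split learning with assumed toy dimensions, not TripleBlind's actual Blind Learning protocol: the data owner runs only the front half of the network on its raw data, and only the activations at the split layer cross the boundary.

```python
import numpy as np

# Generic split-learning forward pass: raw data and the front half stay with
# the data owner; only the split-layer activations cross the boundary, and the
# back half of the model never leaves the model owner.
rng = np.random.default_rng(1)

# Data owner's side.
X_private = rng.normal(size=(32, 10))               # raw records, never shared
W_front = rng.normal(scale=0.1, size=(10, 16))      # front half of the network

def front_half(X):
    return np.maximum(X @ W_front, 0.0)             # ReLU activations at the split

# Model owner's side.
W_back = rng.normal(scale=0.1, size=(16, 1))        # back half, never shared

def back_half(activations):
    return activations @ W_back

smashed = front_half(X_private)                     # only these activations are transmitted
predictions = back_half(smashed)
print(predictions.shape)                            # (32, 1), computed without sharing raw data
```

In full split learning, gradients at the split layer also flow back during training; this sketch shows only the forward pass.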

Book A Demo

TripleBlind’s innovations build on well-understood principles of data protection. They radically improve the practical use of privacy-preserving technologies by adding true scalability and faster processing, with support for all data and algorithm types. We support all cloud platforms and unlock the intellectual property value of data while preserving privacy and enforcing compliance with HIPAA and GDPR.