As The World Rapidly Creates Data, TripleBlind’s Solution Offers Enterprises The Ability To Share Data Across Borders and Industries

Over the last five years, the amount of data produced around the world has grown rapidly. Analysts at IDC predict that by 2025, the global datasphere will reach an astounding 163 zettabytes. Enterprises have had to move quickly to find ways to harness this data for business, enrichment and analytics purposes.

The sheer volume of data has made breaches and personal identity compromises a common occurrence, compelling business and healthcare entities to comply with regulations surrounding data collaboration, such as HIPAA. Additionally, more countries and states around the world are creating separate, geography-based data privacy regulations. This year, in the United States alone, two states signed new data privacy regulations into law and seven others introduced pending privacy bills. Internationally, Brazil’s first comprehensive data protection regulation became enforceable in August 2021, and China’s Personal Information Protection Law will enter into force in November 2021.

Companies are now required to secure stored data and to maintain compliance and governance around data sharing. Technology advances quickly, but the regulatory and legislative process moves slowly, making it difficult for enterprises to ensure they are always collaborating compliantly.

TripleBlind’s CEO, Riddhiman Das, recently spoke with Jerry Buckley, a founding partner of Buckley LLP, and Jody Westby, a prominent data security consultant, for the ADCG Podcast to discuss the rapid evolution of data management and data governance. 

Das suggests that U.S. regulators can more accurately keep up with the rapid change in data worldwide by creating a simplified, overarching federal regulation for data collaboration. 

“Today enterprises are facing a hodgepodge of state privacy regulations and domain-specific privacy regulations, and those often don’t have a lot of overlap. That is a significant hindrance to data liquidity,” said Das. “If data is the new oil, it is not flowing because of the inability to have clarity on how to make the data liquid.”


TripleBlind’s solution makes it possible for enterprises to collaborate on data while remaining compliant with every data privacy regulation across borders and industries. TripleBlind offers the only mathematically proven, privacy-first approach to collaboration: data can never be re-identified once it is shared, and it can only be used for its intended purpose.

Any data privacy regulation can be layered on top of TripleBlind, which then acts as the digital rights management for the data exchange. This approach ensures that data remains private and can only be used for operations that comply with whatever regulations govern the transaction.

The previous expectation for companies was to “not be evil” when sharing data. By eliminating any possibility of data misuse, TripleBlind ensures companies “can’t be evil” when collaborating. This lets enterprises remain safer and more compliant than they would be if they were collaborating through competing data-sharing solutions.

To listen to the full ADCG Podcast episode featuring Das, please visit HERE.

If you’re interested in learning more about how TripleBlind can help you unlock compliant data sharing across borders and enterprises, schedule a call or demo at

What’s Needed for Data Liquidity?

In our recent round table webinar with Okta’s Director, Corporate Counsel, Product and Privacy, Fatima Khan, Polsinelli’s Privacy Attorney, Liz Harding, and TripleBlind’s Co-Founder and CEO, Riddhiman Das, we covered the current state of privacy regulations and how enterprises are approaching private data sharing. Khan, Harding, and Das identified common themes around data collaboration, including:

  • Data localization and transfer restrictions
  • Data minimization 
  • Transparency 
  • Lawful basis
  • Individual rights
  • Security

Not every law contains each theme, but it’s abundantly clear that these are the biggest hurdles for data sharing. There is also great concern about our society’s inability to enforce these regulations. Too often, enterprises rely on good faith that the other parties involved won’t stray from what was agreed upon. Another party could hold a copy of the raw data or run operations the others never approved, and there would be no way of knowing. As a society, we are too technically advanced to leave our private, sensitive data in the hands of companies under so little legal supervision.

Transferring data from enterprise to enterprise has its challenges, and moving data from one jurisdiction to another has historically been difficult, limiting global data collaboration. Imagine the impact and growth we could achieve if we could share data seamlessly from one country to another.

Policies differ from country to country, and no single solution has satisfied data residency regulations and laws everywhere. Approaches like homomorphic encryption and secure enclaves were not built to universally satisfy these laws. Ultimately, they fail to offer a solution that upholds individual rights, complies with the strictest regulations, requires little computational effort, and keeps data private and safe.

TripleBlind was created to overcome these hurdles to data collaboration. TripleBlind’s solution is one-way encrypted and irreversible, meaning the data can never be reconstructed or re-identified. Via fine-grained permissions, TripleBlind ensures only authorized operations can occur, and it works on any data or algorithm. It runs on existing infrastructure with no special hardware dependencies.

If it sounds too good to be true, reach out to us for a demo or free hands-on workshop. Read our Competing Solutions Blog Series if you’d like to learn more about TripleBlind’s advantages over other approaches like synthetic data.

TripleBlind Named a “Think Outside The Box” Solution for Moving Digital Medicine Forward

We were excited to see TripleBlind was included with other “Think outside the box” solutions in an article by John Halamka, M.D., president of Mayo Clinic Platform. The full article explores how data can be compliantly exchanged through TripleBlind’s cryptographic approach, and how once shared via TripleBlind’s one-way encryption, healthcare information remains private and cannot be reconstructed. 

As a thought leader surrounding all things pertaining to sharing healthcare data, John notes that TripleBlind “allows Mayo Clinic to test its algorithms using another organization’s data without either party losing control of its assets.” 

We agree with John 100% that the “magic” is “always about the math,” which is why TripleBlind’s solution has been mathematically tested and proven to keep data private. This market differentiator is a good example of why TripleBlind’s market traction is accelerating.

Check out an excerpt from John’s full article below, and read the full article here: 


Secure Computing Enclaves Move Digital Medicine Forward

At Mayo Clinic Platform, we are deploying TripleBlind’s services to facilitate sharing data with our many external partners. It allows Mayo Clinic to test its algorithms using another organization’s data without either party losing control of its assets. Similarly, we can test an algorithm from one of our academic or commercial partners with Mayo Clinic data, or test an outside organization’s data with another outside organization’s data.

How is this “magic” performed? Of course, it’s always about the math. TripleBlind allows the use of distributed data that is accessed but never moved or revealed; it always remains one-way encrypted with no decryption possible. TripleBlind’s novel cryptographic approaches can operate on any type of data (structured or unstructured images, text, voice, video), and perform any operation, including training of and inferring from AI and ML algorithms. An organization’s data remains fully encrypted throughout the transaction, which means that a third party never sees the raw data because it is stored behind the data owner organization’s firewall. In fact, there is no decryption key available, ever. When two health care organizations partner to share data, for instance, TripleBlind software de-identifies their data via one-way encryption; then, both partners access each other’s one-way encrypted data through an Application Programming Interface (API). That means each partner can use the other’s data for training an algorithm, for example, which in turn allows them to generate a more generalizable, less biased algorithm. During a recent conversation with Riddhiman Das, CEO for TripleBlind, he explained:

“To build robust algorithms, you want to be able to access diverse training data so that your model is accurate and can generalize to many types of data. Historically, health care organizations have had to send their data to one another to accomplish this goal, which creates unacceptable risks. TripleBlind performs one-way encryption from both interacting organizations, and because there is no decryption possible, you cannot reconstruct the data. In addition, the data can only be used by an algorithm for the specific purpose spelled out in the business agreement.”

How TripleBlind’s Data Privacy Solution Compares to Differential Privacy

Differential privacy is not a specific process like de-identification, but a property that a process can have. For example, it is possible to prove that a specific algorithm “satisfies” differential privacy. Informally, differential privacy guarantees the following for each individual who contributes data for analysis: the output of a differentially private analysis will be roughly the same, whether or not you contribute your data.

When computing on data with differential privacy, stochastic noise is added to each data element to mask its actual value. Stochastic refers to a variable process whose outcome involves randomness and therefore some uncertainty. This noise can significantly degrade accuracy, whereas TripleBlind’s one-way encryption algorithms don’t add any noise to the dataset that would impair results.

Differential privacy is suitable for situations with a higher tolerance for error. Apple’s keyboard suggestions are a good example: Apple doesn’t need to know exactly what you’re typing, but it does need to know in general what people are typing in order to offer reasonable suggestions.

Apple itself sets a strict limit on the number of contributions collected from each user in order to preserve privacy. The reason is that the noise used in differential privacy tends to average out over many contributions, making it theoretically possible to infer information about a user’s activity from a large number of observations of a single user. It’s important to note that Apple doesn’t associate any identifiers with information collected using differential privacy.
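The noise-addition mechanism described above can be sketched in a few lines. This is a toy illustration of the standard Laplace mechanism for a counting query, not Apple’s or TripleBlind’s implementation, and the data is made up:

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon, rng):
    """Epsilon-differentially private count.

    A counting query has sensitivity 1 (one person joining or leaving
    the dataset changes the count by at most 1), so Laplace noise with
    scale 1/epsilon masks any single individual's contribution.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon, rng)

rng = random.Random(42)
ages = [34, 29, 41, 52, 38, 27, 45, 60, 33, 48]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
# `noisy` lands near the true count of 5 but is randomized; a smaller
# epsilon means more noise and lower accuracy, which is exactly the
# accuracy trade-off discussed above.
```

Note that over many repeated queries the zero-mean noise averages out, which is why Apple caps per-user contributions.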

Most of the other approaches to data collaboration we’ve covered, including homomorphic encryption, secure enclaves, and tokenization, only work for tabular or columnar data. These approaches face severe challenges when it comes to producing high-performance, accurate models on complicated datasets like x-ray images. TripleBlind solves this problem: the images remain encrypted, in compliance with HIPAA regulations.


TripleBlind allows data from outside sources to be used within our private infrastructure to compute and develop accurate diagnostics. Our Blind AI Pipeline ensures that the original data cannot be reverse-engineered and remains compliant with HIPAA regulations.


If you’re interested in knowing more about how you can safely and efficiently share data, please email us for a free demo. Don’t forget to follow TripleBlind on Twitter and LinkedIn for our latest updates.

This is the final blog in our Competitor Blog Series, where we compared TripleBlind’s technology to other approaches to data collaboration. If you missed the other blogs, you can check them out below!


Read other blogs in this series:

Business Agreements
Homomorphic Encryption
Synthetic Data
Tokenization, Masking and Hashing
Federated Learning

How TripleBlind’s Data Privacy Solution Compares to Tokenization, Masking and Hashing

Tokenization is the process of turning a piece of data, such as an account number, into a random string of characters called a token that has no meaningful value if breached. Tokens serve as a reference to the original data, but cannot be used to guess those values. 

Its use is gaining popularity, especially in the financial services industry. However, there are several limitations to this approach to data sharing compared to TripleBlind. 

When you tokenize a particular data element, you lose the ability to compute on it. Say you’re tokenizing a Social Security number: aggregation and dataset-join tasks become much more difficult, because the same Social Security number can be stored as different data types in different datasets, resulting in different token values.
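The join problem described above is easy to demonstrate. The token vault and the values below are hypothetical, but the failure mode is general to any tokenization scheme keyed on the stored representation:

```python
import secrets

class TokenVault:
    """Hypothetical token vault: maps each distinct raw value to a random token."""
    def __init__(self):
        self._tokens = {}

    def tokenize(self, value):
        # Tokens are random, so they reveal nothing about the value; but two
        # different *representations* of the same SSN get different tokens.
        if value not in self._tokens:
            self._tokens[value] = secrets.token_hex(8)
        return self._tokens[value]

vault = TokenVault()
ssn_as_string = vault.tokenize("123-45-6789")   # SSN stored as a formatted string
ssn_as_int = vault.tokenize(123456789)          # the same SSN stored as an integer

# The two tokens differ, so a join on the tokenized column silently fails
# to match records that actually refer to the same person.
assert ssn_as_string != ssn_as_int
```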

However, with TripleBlind, your end result has higher accuracy with 100% data fidelity because all elements in the data are used for computation. Nothing is hidden, removed, or replaced. The data is used as-is while in complete compliance with the strictest regulations (such as GDPR, CCPA, and HIPAA). 

Let’s say you try a different but similar approach: masking or hashing. Masking techniques range from simple to complex. A simple method replaces the real data with null or constant values. A slightly more sophisticated approach masks the data while retaining characteristics of the original, preserving some of its analytical value. Masking always preserves the format, but there are risks of re-identification.

A hash function is any function that can be used to map data of arbitrary size to fixed-size values. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table.

When masking or hashing medical data for a field like male or female, the protection isn’t very helpful: every instance of “male” will mask or hash to the same value, and every instance of “female” will mask or hash to the same value. Therefore, you must remove the 18 HIPAA identifiers from the dataset entirely before use.
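This weakness of deterministic hashing on a low-cardinality field takes only a few lines to show. The records below are made up; the attack is a standard dictionary (rainbow-table) attack:

```python
import hashlib

def sha256_hex(value: str) -> str:
    """Deterministic hash: the same input always yields the same digest."""
    return hashlib.sha256(value.encode()).hexdigest()

records = ["male", "female", "male", "female", "male"]
hashed = [sha256_hex(r) for r in records]

# Every "male" record hashes to the identical digest...
assert hashed[0] == hashed[2] == hashed[4]

# ...so an attacker who knows the field has only two possible values can
# rebuild the mapping by hashing each candidate and "unmask" the column.
rainbow = {sha256_hex(v): v for v in ("male", "female")}
recovered = [rainbow[h] for h in hashed]
assert recovered == records
```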

TripleBlind’s innovative solution allows all of those HIPAA identifiers to remain in the dataset with zero chance of the data being re-identified at any point. These identifiers include information important for medical insights, such as biometric identifiers or facial images.


HIPAA Identifiers

1. Name
2. Address
3. Significant Dates
4. Phone Numbers
5. Fax Numbers
6. Email Address
7. Social Security Number
8. Medical Record Number
9. Health Plan Beneficiary Number
10. Account Number
11. Certificate or License Number
12. Vehicle Identifiers
13. Device Identifiers
14. Web URL
15. IP Address
16. Finger or Voice Print
17. Photographic Images
18. Other Characteristics that Could Uniquely Identify an Individual


Tokenization only works for tabular and columnar data, so most organizations end up combining approaches like masking and tokenization to get the maximum value out of their data. It doesn’t have to be this way: our solution is one-size-fits-all.

To find out how TripleBlind works for your business, schedule a call or reach out for a free demo at

To learn more about how TripleBlind compares to other competitors and methods of data collaboration, follow us on LinkedIn and Twitter to be notified when we post the next installment in our Competitor Blog Series. Check out our previous blogs here!

How TripleBlind’s Data Privacy Solution Compares to Blockchain

Blockchain is a shared, immutable ledger that facilitates recording transactions and tracking assets in a business network. It is most commonly associated with cryptocurrency: a record of transactions made in bitcoin or another cryptocurrency, maintained across several computers linked in a peer-to-peer network.

Blockchain has its advantages. It’s a great way to keep an audit trail of who did what to your data, but it’s not a good long-run solution for data sharing. With blockchain, the stored data can still be accessed by certain individuals via a private key.

With TripleBlind, all parties involved in data sharing always know what is being done to their data. We provide audit trails of all operations, and all parties must provide cryptographic consent to every operation performed. There is fine-grained control of data and algorithm interactions: TripleBlind can manage individual attributes and record-level permissions on the data. This allows for accurate cryptographic auditability of every data and algorithm interaction without anyone ever seeing the raw data.
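The tamper-evident audit-trail property discussed in this section can be sketched with a minimal hash chain. This is an illustration of the general technique, not TripleBlind’s or any blockchain’s actual implementation, and the logged operations are hypothetical:

```python
import hashlib
import json

def chain_append(log, operation):
    """Append an operation, linking it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"op": operation, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"op": operation, "prev": prev_hash, "hash": digest})

def chain_valid(log):
    """Recompute every link; tampering with any earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"op": entry["op"], "prev": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
chain_append(log, "party A ran approved inference on dataset X")
chain_append(log, "party B trained model M on dataset X")
assert chain_valid(log)

log[0]["op"] = "party A exported raw dataset X"   # tampering with history...
assert not chain_valid(log)                       # ...is detected
```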

Sharing data through a public blockchain means the data is inherently public. It is transparent and highly visible to multiple tiers upstream and downstream, and transparency and visibility are two qualities you don’t want associated with sensitive data.

Lastly, blockchain is not built for the future. Businesses need an approach to data sharing that won’t come undone and leave them scrambling for the next best solution. For this purpose, blockchain is costly, inefficient, and ineffective.

TripleBlind is the future of data sharing and complies with the strictest of data privacy laws and regulations. It can be used around the globe, and our operations will automatically comply with local regulations such as GDPR since everything stays one-way encrypted during our process, and no one gets a copy of the raw data. 

To schedule a call or free demo to explore how TripleBlind can work for your business, please reach out to us. To keep up to date with our latest blogs, follow us on Twitter and LinkedIn!

Read other blogs in this series:

Business Agreements
Homomorphic Encryption
Synthetic Data
Tokenization, Masking and Hashing
Federated Learning
Differential Privacy

How TripleBlind Compares To Federated Learning

Federated Learning is a learning paradigm that allows multiple parties to collectively train a global model on their decentralized data without storing it centrally and, thus, without transmitting it outside the owner’s infrastructure.

Google coined the term Federated Learning in 2016, and the company has since been at the forefront of AI training through this method. At a high level of abstraction, Federated Learning goes through the following steps:

  • A central server chooses an algorithm or statistical model to be trained. The server transmits the model to several data providers, often referred to as clients (consumers, devices, companies, etc.);
  • Each client trains the model on their data locally and shares updates with the server;
  • The server receives model updates from all clients and aggregates them into a single global model. The most common aggregation approach is averaging.
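The three steps above can be sketched in a toy federated-averaging round. The clients, their data, and the one-parameter linear model below are assumed for illustration; this is not Google’s implementation:

```python
# Minimal federated averaging for a linear model y = w * x.

def local_update(w, data, lr=0.1):
    """Step 2: one local gradient step on mean-squared error,
    using only this client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

clients = [
    [(1.0, 2.1), (2.0, 3.9)],   # client 1's private data (roughly y = 2x)
    [(1.0, 1.9), (3.0, 6.2)],   # client 2's private data
]

w_global = 0.0                                      # step 1: server picks a model
for _ in range(50):                                 # communication rounds
    local_ws = [local_update(w_global, d) for d in clients]   # step 2
    w_global = sum(local_ws) / len(local_ws)        # step 3: server averages

# Only model weights ever leave a client; the raw (x, y) pairs stay local.
# After training, w_global is close to the underlying slope of about 2.
```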

Federated Learning has the opportunity to be beneficial in both the healthcare and financial markets, with the potential to build unbiased models from large amounts of consumer information. In healthcare, models trained via Federated Learning can help diagnose rare diseases based on other patients’ data. In fintech, Federated Learning allows institutions to detect crime and risk factors within their collaboration network.

Federated Learning shares only the results and learnings of the algorithms, which are sent back to the server without the actual data ever leaving the clients; it is meant to keep individual consumer data private. However, while Federated Learning allows for more privacy than was previously possible with AI, it has downsides when it comes to model privacy and the efficiency of collaboration.

Because Federated Learning requires each of the clients to train the model on their entire dataset locally, there is both a high computational load and high communication overhead.

When multiple parties collaborate through Federated Learning, the model through which the collaboration takes place is known to everyone involved, making it susceptible to several attacks that could lead to data leakage and putting the privacy of the model itself at risk.

TripleBlind’s Blind Learning approach is more efficient than Federated Learning and offers a more secure and precise way to share data. With TripleBlind’s groundbreaking solution, de-identified data is shared through models while TripleBlind and all other parties involved remain blind to both the model and the original data.


This comparison shows how private data shared via TripleBlind’s solution remains private and de-identified in the case of a data breach


Data sets are shared so that only information relevant to the collaboration is exposed, and it can be used only for its intended purpose. By preventing reconstruction attacks, TripleBlind ensures there is no risk of the data being re-identified even if a data breach were to occur.

We are comparing TripleBlind’s technology to other modes of data collaboration as part of our Competitor Blog Series. Stay up to date with TripleBlind on Twitter and LinkedIn to learn more. If you’re interested in knowing more about how collaborating using TripleBlind’s patented solution can safely and efficiently unlock privacy for you, please email for a free demo.

Read other blogs in this series:

Business Agreements
Homomorphic Encryption
Synthetic Data
Tokenization, Masking and Hashing
Differential Privacy

How TripleBlind’s Data Privacy Solution Compares to Synthetic Data

Synthetic data is a form of collaboration in which businesses share artificially generated information with each other for analysis, without sharing real customer or patient information. An obvious drawback of collaborating via synthetic data is that businesses are sharing generated data sets rather than real data; however, synthetic data is acceptable when real data is unnecessary.

For example, synthetic data may be used by a credit card aggregator to determine macro trends, because not every bank collaborates with the aggregator and not every credit card provider will offer data. In those situations, synthetic data is acceptable for gleaning industry macro-trends.

However, if a company wanted to determine if a customer deserves a particular credit limit or understand how a small part of the population’s microtransactions yield a certain insight, they would need real data.
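This macro-versus-micro distinction can be sketched with a toy synthetic-data generator. Real synthetic-data tools model far richer structure; here we fit only the mean and standard deviation of a made-up transaction column and sample from a Gaussian:

```python
import random
import statistics

# Hypothetical real transaction amounts (never shared with the partner).
real_amounts = [12.5, 47.0, 8.9, 150.0, 23.4, 61.2, 35.8, 19.9, 88.1, 42.3]

# Fit only aggregate statistics of the real data...
mu = statistics.mean(real_amounts)
sigma = statistics.stdev(real_amounts)

# ...then sample synthetic records from the fitted distribution.
rng = random.Random(7)
synthetic = [rng.gauss(mu, sigma) for _ in range(10000)]

# Macro trends survive: the synthetic mean tracks the real mean.
assert abs(statistics.mean(synthetic) - mu) < 5.0

# But no synthetic record corresponds to a real customer, so record-level
# questions (credit limits, micro-transaction insights) cannot be answered.
```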

Another problem with sharing synthetic data is that outlying data points are often omitted, making the data set inaccurate; outliers that are retained can later be re-identified through spear-phishing or cross-correlation.

TripleBlind is far superior to sharing synthetic data because businesses can fully analyze real data in order to understand real trends. TripleBlind’s solution allows for data collaboration without jeopardizing privacy or compliance. Data shared through TripleBlind’s solution remains de-identified, private and can only be used for its intended purpose.

As shown in the above chart, collaboration via synthetic data falls short in most categories where accuracy and compliance are necessary. TripleBlind’s solution, by contrast, fulfills the criteria across the board, making it a superior way to share data.

To learn more about how TripleBlind compares to other competitors and methods of data collaboration, follow us on LinkedIn and Twitter to be notified when we post the next installment in our Competitor Blog Series.

If you’d like to schedule a call or free demo to explore how TripleBlind can work for your business, please reach out to


Read other blogs in this series:

Business Agreements
Homomorphic Encryption
Tokenization, Masking and Hashing
Federated Learning
Differential Privacy

How TripleBlind’s Solution Can Make Data Sharing in Healthcare More Horizontal

TripleBlind recently hosted a virtual roundtable discussion featuring thought leaders from Mayo Clinic and Novartis to explore the current state of data sharing in healthcare. TripleBlind’s co-founder and CEO, Riddhiman Das, was joined by Mayo Clinic’s Dr. Paul Friedman and Dr. Suraj Kapa and Sukant Mittal from Novartis. 

Current issues surrounding data sharing in healthcare

While the expansion of electronic medical records and technological advancements have produced vast amounts of health data, this data is not broadly shared due to concerns about personally identifiable information (PII) and protected health information (PHI).

When this data is not readily available to share and use, healthcare professionals cannot access the information that would create a more equitable pool of patient data and lead to advances in diagnosis and treatment. Doctors need a way to respect patient privacy while gaining access to more comprehensive health histories.

How the issue is currently being addressed

While complying with data privacy regulations, healthcare organizations are still doing all they can to ensure data pools are unbiased. 

Mayo Clinic currently validates its models across independent populations, spanning different ethnicities, races, and more, within its own data sets. This task becomes more difficult at the scale of a global population, where regulations differ from country to country.

Training data is essential to Mayo as it captures data from the broadest possible population. Mayo’s neural networks can detect subtle, interrelated patterns that translate the hidden signals the human body gives off all the time, but they will not function properly if untrained. Today, roughly 30 hospitals across four continents provide data to Mayo, and the network is continually expanding as permitted.

In a perfect world, data sharing would be more horizontal

While institutions like Mayo work to remain unbiased and ethical, there remains a void across the global healthcare industry: no ethical, compliant way to crowdsource patient information.

During the webinar with TripleBlind, Dr. Suraj Kapa mentioned that ideally, in the future of digital health, institutions could move away from monopolies of data and sharing data would be more horizontal. Organizations would be able to access data that reflects the broader concept of the world’s population rather than segmented, narrow cohorts of patients.

Compliantly sharing crowdsourced healthcare information in real time would create limitless possibilities and accelerate discovery and understanding for healthcare providers.

How TripleBlind can help healthcare institutions achieve this desired outcome

When it comes to private healthcare data, TripleBlind aims to enable its liquidity in order to foster innovation in healthcare.

TripleBlind’s groundbreaking solution allows highly regulated enterprises like healthcare institutions to access and share de-identified data without ever decrypting it. When de-identified data is shared, there is no chance of compliance issues or of the data being re-identified. TripleBlind enables institutions to leverage third-party data, or to allow third parties to use their data, while guaranteeing that the data will be used only for the stated purpose.

With TripleBlind’s technology, organizations can cover global ground rather than operating against the specific, narrow regulations that vary worldwide.


To learn more about how TripleBlind’s technology can open the door to compliant data sharing for your organization, please reach out to us for a free demo. To watch a video of the roundtable featuring TripleBlind, Mayo Clinic and Novartis, visit here.

How TripleBlind’s Data Privacy Solution Compares to Homomorphic Encryption

Homomorphic encryption is a technique that allows for computations to be done on encrypted data without needing a secret decryption key, allowing only the owner or those with the secret key to see the results of the computations. There are multiple applications in which fully homomorphic encryption can be applied, from something as simple as keeping a person’s Internet search history private from third-party marketers to more complicated computations such as those done with healthcare data. Homomorphic encryption is considered one of the more well-rounded encryption solutions in the market and has been adopted by tech giants like IBM and Microsoft.
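The core idea of computing on ciphertexts can be shown with textbook RSA, which is multiplicatively homomorphic. This is a toy illustration only: the parameters below are tiny and insecure, and real fully homomorphic schemes like those IBM and Microsoft use are far more elaborate:

```python
# Textbook RSA with tiny, hand-chosen, insecure parameters (illustration only).
p, q = 61, 53
n = p * q              # modulus: 3233
e = 17                 # public exponent
d = 2753               # private exponent: (17 * 2753) % 3120 == 1

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
# Multiply the *ciphertexts* without ever decrypting...
c_product = (encrypt(a) * encrypt(b)) % n
# ...and decryption yields the product of the *plaintexts*:
assert decrypt(c_product) == (a * b) % n   # 7 * 6 = 42
```

The homomorphic property holds because (a^e · b^e)^d ≡ (a·b)^(e·d) ≡ a·b (mod n); fully homomorphic schemes extend this idea to arbitrary additions and multiplications, which is where the heavy computational cost comes from.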

However, homomorphic encryption’s most significant barrier to widespread use is its computational overhead and latency. In fact, according to IBM’s homomorphic encryption trials, it requires more than 42 times the compute power and 20 times the memory of comparable unencrypted computation.

Homomorphic encryption’s speed is not the only place it falls short compared to TripleBlind’s data privacy technology. Below is a comparison chart of the two solutions:

TripleBlind:
  • Fast
  • Universal, cloud-based
  • Future proof
  • Blind inference supports all non-linear operations, including comparisons
  • Requires all parties to be online
  • All parties consent to each use
  • Mathematical digital rights management

Homomorphic Encryption:
  • Slow
  • High CPU needs
  • May be cracked in the future
  • Only supports basic algebraic operations
  • Operates offline
  • Doesn’t require consent of all parties for other uses
  • No digital rights management

There are other areas in which homomorphic encryption doesn’t stack up against TripleBlind.

As you can see in the above comparison, homomorphic encryption falls short in too many categories to provide an enterprise with a complete solution. Enterprises would likely need one or more additional solutions to fulfill all the criteria.

Unlocking private data sharing with TripleBlind’s solution allows businesses to collaborate more fully, more compliantly, and across broader horizons than homomorphic encryption. To learn more about how TripleBlind compares to other competitors and methods of data collaboration, follow us on LinkedIn and Twitter to be notified when we post the next installment in our Competitor Blog Series.

If you’d like to schedule a call or free demo to explore how TripleBlind can work for your business, please reach out to