The data ecosystem is broken. In the current market, if Company A wants to share data with Company B, it has to decrypt the data and send it over the internet; once received, Company B has to replicate it for use. Decrypting and duplicating data carries multiple risks, including:
- Company A cannot put any restrictions on the use of the data;
- Both companies face liability concerns;
- Both companies are subjected to expensive and time-consuming contracts and negotiations.
Right now, the most popular solution for minimizing risk for both companies A and B is the secure enclave. Secure enclaves enable confidential computing, which ensures that different programs running on the same machine or cloud server cannot access one another’s memory, keeping data in use private. A secure enclave acts as a black box, keeping data stored separately from other machine processes and thereby protecting all of the data and code inside the enclave. However, secure enclaves have limitations.
Secure enclaves store data on a public cloud, which addresses the risk posed by company employees and third-party vendors with access to the same physical hardware. With secure enclaves in place, the possibility of an intentional or unintentional breach is minimized. However, enclaves do not solve the privacy challenges posed by HIPAA, GDPR and other government regulations. Even with secure enclaves, the path to regulatory compliance is costly and strenuous.
For instance, if a medical research lab wants to share patient data with a drug manufacturer using only secure enclaves, then to be HIPAA compliant the lab has to remove the 18 PHI identifiers so the data is anonymized, consult third-party analysts, establish legal terms, negotiate a BAA and then rely on good-faith adherence to those terms. Each of those steps costs money, and the last one leaves the data at risk of abuse.
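The Safe Harbor redaction step described above can be sketched in code. This is only an illustration: the field names and record are hypothetical, and a real pipeline must cover all 18 identifier categories, including identifiers buried in free text.

```python
# Illustrative sketch of HIPAA Safe Harbor redaction.
# Field names are hypothetical; a real pipeline must handle all 18
# identifier categories, including free-text fields.
PHI_FIELDS = {"name", "address", "phone", "email", "ssn", "mrn", "birth_date"}

def redact_phi(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers."""
    clean = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    # Safe Harbor: ages over 89 must be aggregated into a single "90+" bucket.
    if isinstance(clean.get("age"), int) and clean["age"] > 89:
        clean["age"] = "90+"
    # Safe Harbor: keep only the first three digits of the ZIP code.
    if "zip" in clean:
        clean["zip"] = str(clean["zip"])[:3] + "**"
    return clean

record = {"name": "Jane Doe", "age": 93, "zip": "66211", "diagnosis": "A10.9"}
print(redact_phi(record))  # {'age': '90+', 'zip': '662**', 'diagnosis': 'A10.9'}
```

Even this simplified sketch shows why the process is lossy: the exact age and full ZIP code are gone from the output, which is precisely the fidelity cost discussed throughout this post.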
Secure Enclaves Do Not Solve Data Privacy Issues on Their Own; TripleBlind Does
As stated above, secure enclaves have been an effective solution for protecting data, but they are limited because both the data and the algorithm must be in the same physical location. TripleBlind does not have those constraints: with TripleBlind, enterprises are not restricted by the physical location of their data or algorithms.
By itself, confidential computing is expensive, time-intensive and complex. Pairing it with TripleBlind’s Blind Data Utilization Toolbox simplifies data regulation compliance and eliminates much of the work and cost associated with achieving data de-identification.
By itself, TripleBlind can ensure compliance with any data privacy law or regulation. When combined with secure enclaves, TripleBlind creates a thorough approach to ensure sensitive data is never accessible by unauthorized users, programs, applications or companies at any stage of the data lifecycle.
Comparison of TripleBlind and Secure Enclaves
| TripleBlind | Secure Enclaves / Confidential Compute |
| --- | --- |
| Does not require movement of data residing in multiple locations or countries | Requires data to be compiled in one place |
| Real-time data de-identification with Blind De-Identification | No de-identification; requires manual anonymization and tokenization |
| Allows for easy aggregation of data from multiple sources while enforcing regulations | Requires a great deal of paperwork, BAAs, resources and time |
| Enables data operations to occur across the world from anywhere | Does not allow operations on European data to take place from the US |
| Allows the raw data to stay in country during operations | Data must be moved so that the algorithm and the data reside in the same physical location |
| Brings digital rights to the data – any regulation can be enforced in the rights that govern the data | Does not enable digital rights on the data; trusted-but-curious parties can still access raw data |
| Easy to use via a simple API | Difficult to use – requires complex lower-level operations |
| Blind Learning protects against training data leakage from the trained model | No model protection – training data leakage is still possible |
| Data residency compliant, because raw data stays local | Does not solve data residency issues, since data must be compiled in one place |
| Keeps algorithm intellectual property secure | Algorithm can be susceptible to reverse engineering of intellectual property and training data |
| Eliminates the need for data sharing agreements | Data sharing agreements are a necessity for this approach |
| Reduces liability for the receiver of data | Even if best practices are followed, the receiver holds the raw data, which could still be exposed |
| Reduces liability for the sender of data | The sender cannot control how the receiver uses the data and takes on significant risk |
| Does not address shared hardware compute concerns on the public cloud | Specifically addresses shared hardware compute privacy needs on the public cloud |
| Enforces permissions on how the data can be used | Does not enforce permissions on how the data can be used |
| Maintains an auditable log of every operation on every piece of data | Does not keep an auditable log of data operations |
| Does not require tokenization of data – works with unstructured (untokenizable) data | Requires tokenization of data – not feasible with unstructured data |
| No limitations on operations on the data, as long as they are permissible | Accessing the GPU is difficult – training neural networks is a challenge |
| All software (no hardware dependencies) – vulnerabilities can be fixed with a software patch | All hardware – vulnerabilities are well known and take years to patch |
Secure enclaves on their own are not enough to solve data privacy regulatory issues. Contact us today at firstname.lastname@example.org to learn about how TripleBlind provides enterprise data privacy unbounded by the physical location of the data or the algorithm.
We’re excited to announce that Sam Abadir has joined TripleBlind as its new Director of Partnerships! Abadir will be working with TripleBlind’s partners as well as helping customers understand the value of sharing data in ways that weren’t possible before.
“We’re honored to have Sam Abadir join us, having over two decades of knowledge in risk management,” said TripleBlind CEO Riddhiman Das. “Under his guidance, we hope to expand our partnership network to allow enterprises to collaborate in ways that were once unimaginable but are now necessary for the future of trust.”
Abadir has over a decade of consulting experience in the software solutions space. Prior to joining TripleBlind, he worked for NAVEX Global, offering integrated risk and compliance management software and services. He has dedicated his career to educating the world on governance, risk and compliance, and helping organizations use the data and content around them to better manage risk.
“I’m excited to join the TripleBlind team. Having spent years working with different companies managing risk and data sharing, TripleBlind’s solution is innovative and unlocks a lot of opportunities for companies to solve problems better while still ethically protecting data,” said Abadir.
During a recent 60 Minutes segment, Anderson Cooper investigated facial recognition software’s use in criminal investigations. Using complex mathematical algorithms, the facial recognition software compares a suspect’s face to potentially millions of other mugshots in a dataset.
However, these algorithms are built and trained on a finite number of photos from a demographically unbalanced dataset. This means that when the software compares an image to millions of others, it has a harder time distinguishing Black, Asian and female faces in particular. Once a suspect’s face is run through the software, it provides possible matches ranked in order of probability.
In the case of Robert Williams, police argue that his wrongful arrest was due to sloppy work by humans, not the software. Ideally, data analysts review the results provided by the software to determine which seem accurate, and even then the output can serve only as a lead. Police cannot arrest or charge individuals based on facial recognition alone. Yet human error and biased AI have led to an unknown number of wrongful arrests; we know of at least three individuals who have filed lawsuits over such errors.
One issue is the lack of national guidelines around facial recognition. Cities and local agencies decide how to use it, who can run it, whether formal training is needed and what kinds of images can be used. In some cases, police photoshop a suspect’s facial features, especially when the face is partially obscured; they fill the gaps with someone else’s features, which further skews the accuracy of results already produced by a problematic algorithm.
It’s been challenging to acquire datasets that are diverse, private, yet easily accessible.
With TripleBlind, we offer the ability for these algorithms to be built and trained on real data, not modeled data, so that inherent biases are reduced. Algorithms can train and learn from datasets that represent real faces with a wide variety of features. This hasn’t been done yet because of the lack of solutions that offer complete data privacy and integrity while remaining efficient and cost-effective.
One of TripleBlind’s most significant features is its compliance with HIPAA, GDPR and other regulatory standards. We offer the sole solution that successfully de-identifies genomic data. We ensure that no one can be re-identified and that the data is never copied and never decrypted. With TripleBlind, we can start filling the gaps in the diverse data needed for facial recognition to be balanced and trusted.
Agencies and cities are facing costly settlements for wrongful arrests. It’s unknown how many other people have been wrongfully arrested, given that some arrested individuals never find out that facial recognition led to their arrest. Using facial recognition is a controversial practice and will be the subject of many laws and regulations that could make cities vulnerable to more lawsuits.
In 2020, identity fraud losses exceeded $56 billion in the United States alone. This number includes $13 billion for traditional identity fraud, such as data breaches, and $43 billion for other types of identity fraud scams.
Financial services companies have been reluctant to collaborate for multiple reasons, including competitive pressures, concerns about antitrust exposure and concerns about data privacy. However, $56 billion is too large a number to ignore. A key reason identity fraud happens is that any one financial institution has only a limited profile of its customers; the typical consumer has multiple accounts with multiple institutions. If financial institutions could collaborate and gain a holistic picture of their customers, they could develop more effective algorithms to combat identity fraud. The same holds for other illegal activity, such as money laundering schemes. In our recent Privacy Enables the Adoption of Open Banking blog, we discuss reasons banks and financial institutions are still reluctant to share data with competitors.
Solutions that enable and facilitate collaboration haven’t been up to the task. The legal agreements institutions attempt to put in place are complex, take a long time to negotiate and rely on the goodwill of the parties involved. Some technology solutions, such as homomorphic encryption, do enable data sharing in compliance with data privacy standards, but severely degrade the performance of financial institutions’ networks. Others, such as secure enclaves, provide an incomplete solution.
TripleBlind’s solution addresses these issues and allows financial services competitors and partners alike to share data without needing to trust the recipient because the most sensitive information within each data set remains private. TripleBlind’s API-driven virtual exchange creates an environment where encrypted data can safely be shared and used by institutions without ever exposing them to the risks that come with handling raw data, ultimately reducing fraud, intentional or not, and ensuring higher levels of compliance.
One example of how TripleBlind’s solution could prevent credit card fraud: Bank 1, Bank 2 and Bank 3 share encrypted data with a credit card fraud detection company using TripleBlind’s private AI infrastructure. If a customer has accounts with all three banks, the fraud detection company benefits most from accessing spending habits from all three sources, with data shared among them to keep the customer’s finances secure.
However, while Bank 1 wants to give the fraud detection company information about the consumer’s spending habits, it is reluctant to share that data with Banks 2 and 3. TripleBlind’s technology would give Banks 2 and 3 only the information essential to determining whether the customer’s account has been compromised, and vice versa for data from the other two banks.
Additionally, the data can only be used for its agreed-upon purpose. So if Bank 1, Bank 2 and Bank 3 agree to share data for fraud detection, they cannot access it for additional operations, such as marketing activities.
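The purpose restriction described above can be pictured as a permission gate that every operation must pass. The class and method names below are our own illustration, not TripleBlind’s actual API:

```python
# Minimal sketch of purpose-bound data access.
# Names (DataAsset, run) are illustrative, not TripleBlind's API.
class DataAsset:
    def __init__(self, data, allowed_purposes):
        self._data = data
        self._allowed = set(allowed_purposes)

    def run(self, operation, purpose):
        """Execute an operation only if its declared purpose is permitted."""
        if purpose not in self._allowed:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        return operation(self._data)

# The banks agree the data may be used for fraud detection only.
transactions = DataAsset([120.0, 85.5, 4300.0],
                         allowed_purposes={"fraud_detection"})

flagged = transactions.run(lambda txns: [t for t in txns if t > 1000],
                           purpose="fraud_detection")
print(flagged)  # [4300.0]

# transactions.run(sum, purpose="marketing") would raise PermissionError
```

In the real system such consent is enforced cryptographically for every operation rather than by an application-level check, but the effect for the participating banks is the same: data shared for fraud detection cannot be repurposed for marketing.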
Sharing data with TripleBlind allows competitors to collaborate for mutual benefit without giving up the proprietary data – everybody wins.
TripleBlind has already partnered with leaders in the healthcare and financial services industries to tackle their data sharing needs with ensured safety, including Mayo Clinic, BC Platforms and Snowflake. If you are interested in exploring how your company can increase your data sharing capabilities, please contact us for a free demo HERE.
TripleBlind is currently the only solution that effectively de-identifies genomic data. Its groundbreaking approach to data sharing involves de-identification via one-way encryption that allows for safe and compliant data sharing among healthcare institutions. The solution meets the legal definition of de-identification, and TripleBlind never hosts any data that is being shared.
TripleBlind unlocks the ability for healthcare organizations to share PHI, health records, genomic and other data, keeping data usable at its highest resolution without incurring an accuracy penalty. TripleBlind de-identifies data by splitting each record randomly, byte by byte, automatically de-identifying it without anonymizing it. Because the random splits cannot be used to identify an individual, the data sharing remains compliant with privacy standards such as HIPAA and GDPR.
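The random byte-by-byte splitting described above can be pictured with a generic secret-sharing sketch: each byte is masked with uniformly random bytes, so each share on its own is indistinguishable from noise, while the two shares together reconstruct the record exactly. This is a textbook XOR-sharing illustration, not TripleBlind’s proprietary implementation:

```python
import secrets

def split(data: bytes) -> tuple[bytes, bytes]:
    """Split data into two random shares; each share alone is uniform noise."""
    share_a = secrets.token_bytes(len(data))               # uniformly random
    share_b = bytes(x ^ y for x, y in zip(share_a, data))  # XOR mask of the data
    return share_a, share_b

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares together to recover the original record."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

record = b"patient:ACGTACGT"
a, b = split(record)
assert a != record and b != record   # neither share reveals the record
assert reconstruct(a, b) == record   # together they restore it exactly
```

Because each share is statistically independent of the record, holding one share tells a party nothing about the individual, which is what makes this style of splitting de-identification rather than mere masking.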
Blind de-identification via one-way encryption provides many advantages over the five data anonymization methods most frequently used today, foremost among them that blind de-identification does not alter the fidelity of the data. The other methods are often slow and expensive, and it is frequently unclear whether full datasets are actually de-identified and secure:
- K-anonymization alters the fidelity of the data in two ways: suppression (data masking), in which certain attribute values, or all values in a column, are replaced with an asterisk; and generalization, in which individual attribute values are replaced with a broader category, e.g., the value 19 might be replaced with <20;
- Pseudonymization replaces private identifiers with fake identifiers, or pseudonyms;
- Data swapping (shuffling or permutation) rearranges the dataset’s attribute values so they do not correspond with the original records;
- Data perturbation modifies the original dataset by rounding numbers and adding random noise, a technique closely related to differential privacy;
- Synthetic data is often used in place of altering the original dataset (or using it as-is and risking privacy), but even the best synthetic data is still only a replica of specific properties of the original data.
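The fidelity loss these traditional methods introduce is easy to see in a toy example (the ages below are made up):

```python
import random

ages = [19, 23, 31, 67, 42]

# K-anonymization via generalization: exact values become broad buckets,
# so downstream analysis can no longer distinguish a 23-year-old from a 67-year-old.
generalized = ["<20" if a < 20 else "20+" for a in ages]

# Data perturbation: random noise distorts every individual value.
random.seed(0)
perturbed = [a + random.randint(-3, 3) for a in ages]

print(generalized)  # ['<20', '20+', '20+', '20+', '20+']
print(perturbed)    # individual values no longer match the originals
```

In both cases the privacy gain is bought with accuracy: any model trained on the generalized or perturbed column is learning from degraded inputs, which is the penalty blind de-identification is designed to avoid.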
One-way encryption creates a clear path from data collection to data usage that is significantly faster, cheaper, more seamless and compliant.
We have upcoming webinars that go into depth about our services so follow us on LinkedIn and Twitter for updates. If you have questions or would like a free hands-on demo, reach out to us at email@example.com.
In July 2020, the Court of Justice of the European Union issued its Schrems II decision, finding that the EU-U.S. Privacy Shield Framework, on which more than 5,000 U.S. companies relied to conduct trans-Atlantic trade in compliance with EU data protection rules, was invalid. Since then, companies have had to reevaluate their transatlantic data sharing operations on a case-by-case basis, costing time and money to achieve the required level of compliance.
TripleBlind has the solution to this turmoil. As we discussed before the decision was made, our technology allows entities to comply with these new standards and achieve their business objectives regardless of location. Deploying TripleBlind enables enterprises to share data and collaborate with other enterprises with confidence, knowing that TripleBlind automatically enforces HIPAA, GDPR and other regulatory standards.
We built TripleBlind to remain future-proof by creating a solution that automatically complies with even the strictest standards. Blind de-identification is TripleBlind’s novel method of data de-identification via one-way encryption; it allows all attributes of the data to be used, even at an individual level, while eliminating any possibility of the data user learning anything about the individual. This means data is legally de-identified in real time, with a practically 0% probability of re-identification.
TripleBlind enables the processing and analyzing of sensitive data without ever moving it across borders. The data always remains encrypted, de-identified and is completely blind to TripleBlind and data consumers.
See our graphic below for a visual summary of how TripleBlind solves the Schrems II turmoil.
TripleBlind has the only private, encrypted and de-identified aggregated analysis pipeline. EU data stays within boundaries, and enterprises are able to efficiently and cost effectively share all types of data, even data that traditionally can’t be de-identified, such as genetic data.
We have upcoming webinars that go into depth about our services so follow us on LinkedIn and Twitter for updates. If you have questions or would like a free hands-on workshop, reach out to us at firstname.lastname@example.org!
Recently, we came across an insightful article from the World Economic Forum, “What if we get tech right?”, which covers emerging technologies; one part in particular caught our attention. Benjamin Haddad, Director of Technology Innovation at Accenture, and Algirde Pipikaite, Strategic Initiatives Lead at the World Economic Forum’s Centre for Cybersecurity, stressed the importance of designing data architecture with privacy and security embedded. Today, we often rely on ethics alone when it comes to data compliance, putting too much personal information at risk. But as we move closer to data liquidity, laws are being proposed to control and protect sensitive data.
At TripleBlind, developing advanced mathematics to create an entirely new, comprehensive and streamlined approach to data privacy is our reason for being. TripleBlind’s cryptographic digital rights management allows fine-grained control of data and algorithm interactions, with cryptographic consent needed for every operation. We never decrypt or copy the data and algorithms, meaning no one involved, including data scientists and TripleBlind ourselves, ever sees the raw data or algorithms. Everything remains confidential and secure without losing the value of the data itself.
“This political debate over data residency is expected to gain as much importance as the one on foreign ownership of a country’s sovereign debt.”
TripleBlind also allows computations to be done on enterprise-wide global data while enforcing data residency regulations. We know data residency laws vary from country to country, and tracking them while maintaining compliance is difficult and costly. TripleBlind enables convenient access to global data silos, all while maintaining compliance with even the strictest data laws.
As we head toward a future where big data is going to be key to unlocking countless advancements and insights, we have to put the privacy and security of that data first. TripleBlind is at the forefront of addressing this issue and we will continue to set the industry standard for data sharing. To keep up to date with us, subscribe to our newsletter and follow us on LinkedIn and Twitter!
I was recently given the opportunity by insideBigData to provide a perspective on some of the possibilities present today for sharing regulated data. You can read it here as well as below where we’ve reproduced it in its entirety.
Harness the Opportunities of Sharing Regulated Data
Insights-rich but regulated or sensitive data sits in private data stores, unleveraged and unmonetized by enterprises. In 2018, Gartner reported that nearly 97 percent of data sits unused by organizations. There are solutions available today that enable enterprises to share data and collaborate, but they are either cumbersome, slow, ineffective or dangerous, which is why the rate of data sharing remains so low. New solutions allow enterprises to gain insights from enterprise data and address the weaknesses of current offerings, while concurrently enforcing regulatory standards such as HIPAA and GDPR, as well as data residency requirements in regions such as Southeast Asia, China and the Middle East.
Here are a few scenarios in which effective data collaboration would be beneficial.
On average, people own 5.3 accounts across different financial institutions. A person might have a checking and savings account with Wells Fargo, a credit card with Citibank and a mortgage with Chase. If Citibank detects potential fraud on the person’s credit card, there is currently little or no ability for Citibank’s fraud department to collaborate with Wells Fargo and Chase to get a comprehensive picture of the fraud – which would enable Citibank’s security team to identify and thwart the activity.
Approximately 1.2 billion clinical documents, such as patient records, are produced in the United States each year, comprising approximately 60% of all clinical data, with each document offering medical experts a wealth of potentially life-saving insights. However, within any one healthcare system these records are skewed by the demographics of its patients: in some parts of the country the skew might be toward older, white patients; in others, younger, Hispanic patients. When these institutions develop diagnostic algorithms, they are impaired by this skewed data. Today, the solution is to physically ship anonymized data from other healthcare systems to create accurate algorithms, a long, slow and expensive process.
Airline Predictive Maintenance
Aircraft supply chains are a trade secret for parts suppliers, making current predictive models less accurate than they could be. Partnerships among manufacturers are notoriously complex and often serve as a barrier to sharing data. But suppose suppliers could privately run predictive models on airline data and determine the remaining useful life of aircraft and parts without ever having access to the raw datasets. That would set a new precedent for the industry: manufacturer networks would be able to use information from airlines they don’t have direct relationships with, all in compliance with local laws and with their intellectual property protected.
How One New, Breakthrough Solution Works
One new, breakthrough solution enables enterprises to gain insights from data without ever decrypting it. The process starts by privately aggregating data from multiple sources, such as different financial institutions or healthcare systems. It privately explores, selects and pre-processes relevant features for training, and then privately processes the encrypted data. Finally, it trains new, deep statistical models and predicts on any private and sensitive data. The training process features low compute requirements and low communication overhead.
Along with encrypted data, this new approach encrypts the algorithm. The algorithm is blind to the data fed through it and the data is blind to the algorithm executed upon it. And neither the data nor the algorithm is exposed to the solution itself – it is a triple blind answer to gain insights from sensitive data.
By incorporating algorithmic encryption, neither party can reverse engineer the algorithms, the algorithms cannot abuse the data, and neither party can regenerate any of the original training data for the neural networks.
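One way to build intuition for keeping data and algorithm blind to each other is split learning, where the model is partitioned so the data owner computes only the early layers and shares intermediate activations, never raw records, while the algorithm owner never sees the inputs. The numpy sketch below is our own simplification under that assumption, not the actual protocol:

```python
import numpy as np

rng = np.random.default_rng(42)

# Data owner holds the raw records and the first-layer weights (all
# shapes here are made up for illustration).
raw_data = rng.normal(size=(4, 8))   # 4 private records, 8 features each
w_local = rng.normal(size=(8, 5))

# Algorithm owner holds the remainder of the model.
w_remote = rng.normal(size=(5, 1))

# Step 1 (data owner): forward pass through the local cut layer only.
# Only these activations ever leave the premises, never raw_data itself.
activations = np.tanh(raw_data @ w_local)

# Step 2 (algorithm owner): finish the forward pass without seeing raw_data.
predictions = activations @ w_remote

print(predictions.shape)  # (4, 1)
```

In a full protocol the backward pass is split the same way, and additional protections prevent the activations themselves from leaking information about the inputs; the sketch only shows how the partition keeps each party blind to the other’s asset.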
Compared to other approaches like homomorphic encryption or secure enclaves, this enterprise data privacy approach enables “digital rights” to the data: the ability to overlay rules on how the data may be used. Any regulation or other terms governing the use of the data can be baked into the digital rights management contract. This blind pipeline offers the highest privacy and security, the lowest computational load and the lowest communication overhead, with no one ever seeing the entire model. With a suite of tools that allows even the most sensitive information to be shared among competitors, the use cases for this technology are endless. Being blind to all data and algorithms produces the most visible results: the data becomes “liquid” and can be used broadly.