Secure Computation: Today’s Top Privacy-Enhancing Strategies

For companies that handle sensitive data, compliance with privacy regulations is the bare minimum. Companies also have moral and business obligations to ensure that any private data they collect is protected from unauthorized access and use.

At the same time, sensitive data is a massive source of revolutionary insights. Privacy-enhancing strategies are designed to enable the operationalization of sensitive data while still maintaining the privacy of individuals and the protection of sensitive digital assets. Whether hardware- or software-based, these strategies use different approaches to protect data while allowing for more value to be extracted for scientific, social, and commercial benefit. Following are three of today’s top privacy-enhancing strategies: tokenization, synthetic data, and trusted execution environments.



Tokenization

Tokenization replaces sensitive data elements with non-sensitive surrogate values, or tokens, that can be mapped back to the originals only through a secured lookup. In business applications, tokenization can be used to outsource responsibility for handling sensitive data. Companies can store sensitive information in a third-party database and avoid dedicating the resources needed to oversee and handle this data.
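As a minimal sketch, and assuming nothing about any particular vendor's product, a token vault can be modeled as a secure mapping from random tokens to the original values:

```python
import secrets

class TokenVault:
    """Toy token vault: maps random tokens to the sensitive values
    they stand in for. A real vault would live with a third party,
    behind access controls and audit logging."""

    def __init__(self):
        self._store = {}  # token -> sensitive value

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(16)  # random, carries no information itself
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
tok = vault.tokenize("4111-1111-1111-1111")
# Downstream systems only ever see `tok`; authentication requires a
# round trip to the vault to de-tokenize, which is the complexity cost
# discussed below.
assert vault.detokenize(tok) == "4111-1111-1111-1111"
```

Because the token is random, it reveals nothing on its own; all of the risk concentrates in whoever operates the vault.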

While this is an obvious benefit, tokenization doesn’t address many security risks. The most prominent issue with tokenization is the need to trust a third party with access to sensitive data. While business associate agreements can be used to hold a third party liable for misuse, an unethical actor who sees the massive commercial value of a sensitive dataset could consider violating any agreement a comparatively small price to pay.

Furthermore, tokenization adds a layer of complexity to an organization’s infrastructure. In the example of financial transactions, a customer’s account information must be de-tokenized and re-tokenized for authentication to occur. In situations involving massive datasets, such as the training of machine learning algorithms, this added layer of complexity translates to enormous computational costs.

Also, tokenization may not address digital rights management and compliance issues, especially when a third-party provider is storing sensitive data in another jurisdiction or country. While this strategy may be popular and effective for financial transactions, it isn’t well-suited to processing datasets and engaging in international data partnerships.


Synthetic Data

Collecting massive amounts of data for analysis can be a regulatory and logistical nightmare. One popular privacy-enhancing framework developed to address these myriad data challenges is synthetic data.

Unlike standard data collected from original sources, synthetic data is generated from the statistical properties of real data and often serves to augment or replace that real data in mission-critical applications.
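As a hedged, minimal sketch (production generators use far richer models, such as GANs or copulas, rather than a single fitted distribution), "generating from statistical properties" can be as simple as fitting a distribution to a real column and sampling from it:

```python
import random
import statistics

def synthesize_column(real_values, n, seed=0):
    """Generate n synthetic values from the mean/std of a real column.
    A toy stand-in for production synthetic-data generators."""
    rng = random.Random(seed)
    mu = statistics.mean(real_values)
    sigma = statistics.stdev(real_values)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical example data, for illustration only
real_incomes = [42_000, 51_500, 38_200, 64_700, 47_900]
fake_incomes = synthesize_column(real_incomes, n=1000)
# The synthetic column preserves coarse statistics of the original,
# but no individual real record is ever released.
```

Note that a simple parametric fit like this will systematically miss outliers and tail behavior, which is precisely the limitation discussed below.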

Because synthetic data holds the promise of generating new insights and enabling powerful artificial intelligence technologies, it has become a highly regarded tool in industries that deal with sensitive information — finance and healthcare in particular.

Although synthetic data can be very useful, it does have major limitations. Synthetic data systems are not particularly adept at generating outlier data, which means synthetic data often falls short of real-world data. An over-dependence on imprecise synthetic data could lead to false insights that are costly in business, and possibly deadly in healthcare situations.

Synthetic data is an effective strategy in use cases with a narrow focus. In situations involving a wide distribution of outcomes, however, synthetic data often proves to be quite limited in value. This reality is particularly problematic given that many narrow cases have already been defined or studied, and wider-scope studies are now of greater interest and importance.

The generation of “new data” from the statistical properties of sensitive data can also be fraught with challenges. If the original dataset is significantly biased, that bias will likely be passed into the synthetic dataset. Addressing potential bias in synthetic data requires specialized knowledge of the context around the data, and this need for expert intervention decreases its practicality as a privacy-enhancing framework.

Furthermore, it has been shown possible to identify real people based on information in a synthetic dataset, especially if the system used to generate the data set is flawed. Currently, this isn’t a widespread problem, but if synthetic data is more broadly adopted, reverse-engineering private data could become a more attractive option for wrongdoers.

In essence, synthetic data takes an imperfect approach to preserving privacy, resulting in limited actual utility.


Trusted Execution Environments

Trusted execution environments (TEEs) are physical hardware enclaves housing processing systems that are isolated from the main computer’s processing, allowing for the protected storage and computation of sensitive information.

TEEs are designed to protect both the data and code running inside the environment. In data collaborations, TEEs can enable secure remote communications. They store, manage, and use encryption keys only within a secure environment, which limits the possibility of eavesdropping.

Unfortunately, there are a number of issues associated with TEEs. Because these systems are mostly proprietary hardware assets, they do not readily support platform interoperability. This type of privacy-enhancing strategy can also be cumbersome, and using it can be like having a private sandbox on Mars: It’s a secure environment, but it’s difficult to get there.
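The key-handling idea (keys that are used but never exported) can be sketched in ordinary software, with the strong caveat that a real TEE enforces this isolation in hardware, whereas this toy version relies only on API convention:

```python
import hashlib
import hmac
import os

class ToyEnclave:
    """Software sketch of TEE-style key handling: the key is created
    inside the 'enclave' and never leaves it; callers receive only the
    results of operations performed with the key."""

    def __init__(self):
        self.__key = os.urandom(32)  # lives only inside the enclave

    def mac(self, message: bytes) -> bytes:
        # Authenticate a message with the internal key
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison to avoid timing side channels
        return hmac.compare_digest(self.mac(message), tag)

enclave = ToyEnclave()
tag = enclave.mac(b"sensitive record")
assert enclave.verify(b"sensitive record", tag)
assert not enclave.verify(b"tampered record", tag)
```

In an actual TEE, even a compromised operating system cannot read the key material; here the isolation is merely illustrative.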

TEEs are also not impervious to attack. A number of studies have revealed how cryptographic keys can be stolen, and side-channel attacks can be used to expose security vulnerabilities.

Because they are hardware-based, TEEs are not easily patched or updated – new hardware is required. Software, on the other hand, can be updated instantly over the internet, enabling patches to security vulnerabilities, bug fixes, and new functionality to be added in real time.

Finally, TEEs require data and algorithms to be physically aggregated on one machine or server. This is often impossible due to data laws which keep data locked in place. The use of TEEs for cross-border data collaboration could result in a violation of GDPR or data residency laws, resulting in steep fines and reputational damage.


A More Flexible and Practical Privacy-Enhancing Strategy

Many of the most popular privacy-enhancing strategies are effective for certain use cases. However, each one has significant limitations and vulnerabilities. The TripleBlind Solution is an elegant and flexible approach to privacy enhancement that can augment or even replace the top strategies in use today.

Available via a simple API as a software-based solution, our technology improves the practical use of privacy-enhancing technologies and addresses a wide range of use cases. Offering true scalability and faster processing than other options, our technology can unlock the intellectual property value of data while protecting privacy and supporting regulatory compliance.

Please contact us today to learn more about our superior privacy-enhancing solution.


How Privacy Enhancing Computation Can Increase Collaboration Amid Surge of Healthcare Data Privacy Breaches

The Department of Health and Human Services’ Office for Civil Rights’ breach portal reveals 2021 was the worst year ever for healthcare data privacy breaches. Nearly 45 million healthcare records containing patients’ protected health information (PHI) were exposed across 686 healthcare breaches. While the number of incidents that occurred increased only 2.4% in 2021, the number of patients affected increased 32%. As healthcare systems, insurance carriers, medical device manufacturers and others create, store and share more sensitive patient data, the amount of data exposed with each breach increases.

Echoing findings in the State of Financial Crime report, which show that financial services companies that generate and handle data are highly vulnerable to cyberattacks and data privacy breaches, healthcare organizations that collaborate using data are experiencing the same vulnerabilities. As healthcare data proliferates across mobile devices and cloud networks to accommodate trends such as remote work and telehealth, it becomes vulnerable to privacy threats that IT departments may not even be aware of.

The number of attacks against healthcare third-party vendors and business partners increased by 18% compared to 2020. When looking at the top healthcare security breaches of 2021, it’s clear there is a need for healthcare enterprises to dramatically improve the quality of their data privacy practices when collaborating with other healthcare systems, vendors, partners and related entities. 


Privacy-Enhancing Computation Allows Secure Collaboration with Partners

Privacy-enhancing computation (PEC) is designed to allow healthcare institutions to collaborate and innovate without giving up proprietary data. PEC solves for a broad range of data challenges and allows institutions to glean insights from data that has historically been inaccessible due to healthcare privacy regulations.

Here are seven examples of how PEC can increase collaboration and innovation despite the increased risk of healthcare data breaches:

  1. COVID created a need for telemedicine to be more widely used for radiology, increasing the number of reconstruction attacks that infer patient identity from X-ray images. Using X-ray source images from medical imaging centers where patient metadata has been obfuscated, diagnostic AI developers will have more quality data for training, making AI algorithm training on X-rays more secure, more cost-efficient and faster.
  2. By operating algorithms on de-identified data and without the risk of models being reverse engineered, hospitals and others who have developed highly-advanced diagnostics algorithms can license their algorithms for remote diagnostics, without exposing valuable IP.
  3. Because PEC-based operations enforce the appropriate privacy regulations (HIPAA, GDPR, CCPA, etc.), pharmaceutical companies and drug developers can use genomic data sequences to create life-changing drugs and vaccines.
  4. Because clinical trial participant data is protected by HIPAA, researchers typically are not able to analyze or interact with trial data until after the trials are completed. Using de-identified, real-time data throughout clinical trials, healthcare enterprises can conduct early-indication trial reporting without violating regulations for blind and double-blind studies.
  5. As biobanks store data that spans across different hospital systems and legal jurisdictions, it can be challenging for companies to compliantly access that data due to differing privacy regulations. With access to a larger amount of diverse patient data from biobanks, pharmaceutical developers can improve their modeling and analysis.
  6. Using prescription data and sales information from pharmacies with shared customers, hospitals can gain more accurate insight into the medications that patients are actually taking to incorporate in their treatment and wider research.
  7. Prior to PEC, when combining multiple data types for analysis – including image, text, voice, video and more – data scientists needed to create a machine learning model for each type of data and manually combine those outputs to analyze. PEC allows for collaboration using any type of data, allowing healthcare enterprises to better create and train predictive and generalizable AI models.


TripleBlind has created the most complete and scalable solution for solving use cases and business problems that are ideal for Privacy Enhancing Computation. TripleBlind allows data users to compute on data as they normally would, without having to “see,” copy, or store any data. The TripleBlind solution is software-only, supports all cloud platforms and is delivered via a simple API. It unlocks the intellectual property value of data, while preserving privacy and ensuring compliance with HIPAA and GDPR.

Check out these recent blogs from the TripleBlind team to learn more about how privacy-enhancing computation can benefit the healthcare industry and increase data collaboration opportunities:

Contact us today to schedule a personalized demo of our innovative technology! To learn more from TripleBlind thought leaders, be sure to follow us on Twitter and LinkedIn.

Privacy-Enhancing Technologies: Who Should Care and Why?


The Benefits and Challenges of Data Collaboration in Finance

The data that financial institutions collect from their clients and partners is extremely valuable. However, it offers only a limited perspective. If banking and financial companies can collaborate with other organizations using their collected data, the resulting data collaboration can lead to powerful new insights and a wide range of business benefits.

But what exactly is data collaboration? A good definition is the pooling of data across various sources to unlock valuable insights for all participants, though individual organizations define it with some variance, including whether the collaboration happens internally or externally. In the financial services industry, these insights could lead to innovative products, better customer service, and privacy-preserving analytics that increase the effectiveness of information sharing for national and international financial fraud prevention and regulatory compliance.

This type of collaboration requires overcoming many common data problems, including issues with data access, data transformation, and data bias. Value-generating insights are increasingly uncovered through artificial intelligence, and this technology requires massive amounts of information from various data sources to be effective.

Simple Principles for Data Collaboration

A fundamental principle for extracting the most value from data collaboration is developing user-friendly workflows that build in data protection and governance.

Rather than having decision-makers sift through dozens of spreadsheets and workbooks to identify insights, top financial companies often rely on business intelligence dashboards and reports that display easily digestible information. These dashboards focus on tracking key metrics in the same way retail stock trading platforms provide information on an investor’s portfolio.

While these dashboards may appear simple, the technology behind them is not. Artificial intelligence processes large amounts of complex data to derive meaningful insights. Most business intelligence platforms now incorporate AI features that help business analysts quickly make more informed decisions, enabling operational benefits like portfolio position reporting or faster resolution of customer service tickets.

Data used to drive business insights should be easy to understand and query. We may romanticize the idea of game-changing insights surfacing from extensive data analysis, but in practice data-driven insights often simply confirm suspicions leaders already hold from professional experience, and their reliability depends on data quality. And when an analytics platform does produce a surprising result, it’s essential to know which data sources the insight came from and to have the ability to audit the data.

Data aggregation and validation are simpler when companies work with good data sources. Companies should also create a culture that supports collaborative, constructive, and timely dialogue, so that valuable insights are not discarded and critical decisions are not delayed.

Benefits of Data Collaboration

  • Increased access to financial services
  • Better customer experience
  • Better financial products
  • Greater efficiency
  • Greater fraud protection
  • Efficient workforce distribution

According to a survey from Gartner, company leaders who promote data sharing and the dissolution of data silos often see greater value returned from their analytics teams. The research company also predicted that companies that share data will comprehensively outperform rivals that do not by 2023.

Data Collaboration and Increased Access To Financial Services

According to research from McKinsey, data collaboration can lead to economic value in several different ways, including increased access to financial services. When banking institutions understand what services customers need access to, when they need them, and how they prefer those services to work, they can deliver exactly what people need, when and how they need it.

Data Collaboration and Better Customer Experience

Likewise, data collaborations can yield a better customer experience, the same McKinsey research found. For example, identifying data patterns related to how long people wait to reach a representative on the phone, how long certain calls tend to last, and how often and at what point people tend to get tired of waiting on hold before hanging up can all inform staffing strategy to ensure that customers feel like they can reach a live person easily. 

Data Collaboration and Better Financial Products

Armed with data, banking institutions can engage in better decision-making about what kinds of financial products may do best with their existing and potential customers.

Data Collaboration and Greater Efficiency

Data collaboration can mean a more agile rollout of new products and services through greater efficiency. For example, when an investment bank partnered with tech company Altimetrik, it was able to leverage internal data to improve its application development teams’ productivity by lowering the back-end requirements of new applications. That means the bank can respond more quickly to changing customer demands with new applications and online services that can be developed and launched swiftly.

Data Collaboration and Greater Fraud Protection

Data collaborations can also address threats related to fraud and other criminal activity. When banking organizations have a more comprehensive picture of financial transactions across their clients’ internal and partner financial institutions, they can identify suspicious activity within an expanded ecosystem. Data collaborations can also improve credit risk modeling, ESG portfolio construction and detection of financial fraud based on alternate data sources. Combined with privacy-protection techniques, these insights help financial services companies protect data in use and in transit, and support confidential processing of data in artificial intelligence and business intelligence applications within untrusted computing environments like the public cloud.

Data Collaboration and Efficient Workforce Distribution

Earlier, we used the example of how data collaboration can help banking institutions determine appropriate staffing levels for inbound calls so that customers are not stuck with long wait times to speak with a live person. However, data collaboration can help with all facets of efficient workplace distribution. That may be particularly important when it comes to companies with many branches or offices, where shifts in staffing may make sense depending on the times of the year or other external factors.

The Challenges of Data Collaboration

Data collaboration needs a lot of data to provide useful insights. Unfortunately, there are many security and privacy challenges associated with data collaboration efforts. Companies do not want their financial information to be shared without discretion. Even if they did, financial institutions have strong business motives for keeping critical information to themselves and protecting their intellectual property while complying with privacy regulations.

Risks are involved when a company enters into a data collaboration with another organization. The data could be intercepted or misused by other participants during the collaboration process. This could be detrimental to both the organization and its customers. In October 2020, hackers breached a Facebook data partner to run targeted ads based on Facebook data for a money-making scam.

Additionally, the sharing of financial information could violate privacy regulations. In the United States, the Gramm-Leach-Bliley Act (GLBA) and the CCPA outline several privacy guidelines for sharing an individual’s financial information, including regulations on the sharing of personally identifiable information such as date of birth and Social Security number. In Europe, the GDPR strictly governs how organizations may use personal data.

Finally, there are gray areas around the use of data on an individual — actions that are not necessarily illegal or immoral but potentially bear a reputation risk for business.

We Provide Greater Data Privacy and Control for Data Collaboration

Financial institutions are looking to unlock insights to improve customer experience, increase market share, reduce risk, and drive innovative offerings through data collaboration using privacy-enhancing technologies. While approaches like masking, tokenization, differential privacy, and synthetic data can be helpful, the TripleBlind solution compares favorably with these and other privacy-enhancing technologies. Our innovations radically improve the practical use of privacy preserving technology by adding true scalability and faster processing with support for a majority of data formats and machine learning algorithms that can be deployed on cloud and on-premise platforms.

Our patented one-way encryption technology approach ensures that data and algorithms can never be decrypted and only permits authorized operations. Best of all, the TripleBlind Solution is available through a simple API, and we never take possession of any data, algorithms, or answers.

Book a demo today if you want to learn more about how our solution enables data collaboration.


TripleBlind Experts to Highlight Optimal Privacy-Enhancing Technologies for Unlocking IP Value of Data in May 25 Webinar


Chris Barnett, VP of Partnerships & Marketing, TripleBlind
Tim Massey, VP of Product & Customer Success, TripleBlind
Chad Lagomarsino, Sales Engineer, TripleBlind



What if emerging privacy-enhancing technologies (PET) could reshape and accelerate an organization’s data-based innovation activities?

On May 25, TripleBlind will host a webinar to highlight how enterprises can select the optimal privacy-enhancing technologies (PET) to suit their specific business and collaboration needs. Three experts from the company will cover multiple techniques for privacy-enhancement, and offer guidance on how to evaluate and implement those techniques. 



Handling data effectively is among the biggest concerns for C-Suite leaders, compliance officers and data scientists in the healthcare and financial services industries. Issues surrounding data access, data prep, data bias challenges and compliance affect every business that leverages artificial intelligence (AI), machine learning, analytics or collaboration.

The emerging PET category represents a cohort of technological solutions that seek to ease the pains, pressures and risks involved in working with sensitive and protected data. 



Wednesday, May 25, 2022, 11 a.m. CT
“Privacy-Enhancing Technologies: Who Should Care and Why?”



Virtual, via Zoom
Participants can register here


Additional Resources


About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Ensuring Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well understood principles, such as federated learning and multi-party compute. Our innovations radically improve the practical use of privacy preserving technology, by adding true scalability and faster processing, with support for all data and algorithm types. TripleBlind natively supports major cloud platforms, including availability for download and purchase via cloud marketplaces. TripleBlind unlocks the intellectual property value of data, while preserving privacy and ensuring compliance with HIPAA and GDPR. 

TripleBlind compares favorably with other privacy preserving technologies, such as homomorphic encryption, synthetic data, and tokenization and has documented use cases for more than two dozen mission critical business problems.

For an overview, a live demo, or a one-hour hands-on workshop,



Madi Olivé / Valeria Carrillo
UPRAISE Marketing + Public Relations for TripleBlind


TripleBlind Appoints Encryption, Privacy and Blockchain Expert Craig Gentry as Chief Technology Officer

KANSAS CITY, MO – May 18, 2022 – TripleBlind, creator of the most complete and scalable solution for privacy enhancing computation, announces Craig Gentry as the new Chief Technology Officer. Craig will lead TripleBlind’s technology vision for expanding the most comprehensive privacy preserving technology in the industry.

Craig joins TripleBlind with more than 20 years of experience in cryptography, data privacy and blockchain, and has received numerous accolades for his research and advancements, including:

  • 2009 – After inventing the first fully homomorphic encryption scheme as part of his Ph.D., the Association for Computing Machinery (ACM) awarded him the ACM Doctoral Dissertation Award. This award is presented annually to the author of the best doctoral dissertation in computer science and engineering.
  • 2010 – Won the Association for Computing Machinery Grace Murray Hopper Award, which goes to an individual who makes a single, significant technical or service contribution before age 35. Apple inventor and legend Steve Wozniak received the award in 1979.    
  • 2014 – Awarded a MacArthur Fellowship, unofficially but commonly known as the Genius Grant, as a future investment in his originality, insight and potential.


Before joining TripleBlind, Craig served for three years as a research fellow at the Algorand Foundation, an organization dedicated to fulfilling the global promise of the Algorand blockchain, designed to create a borderless global economy. Prior to that, he spent 10 years in the Cryptography Research Group at the IBM Thomas J. Watson Research Center, where he worked with colleagues to bring previously theoretical privacy enhancing technologies – such as homomorphic encryption and zero-knowledge proofs – toward practicality. Craig was introduced to cryptography as a researcher at DoCoMo USA Labs.

Craig holds a Ph.D. in Computer Science from Stanford University, a J.D. from Harvard Law School, and a B.S. in Mathematics from Duke University.

“TripleBlind is a leader in solving real business problems with Privacy Enhancing Computation. The addition of Craig Gentry to our leadership team will foster further innovation and accelerate development of groundbreaking technology,” said Riddhiman Das, CEO and co-founder of TripleBlind. “Craig is a luminary in this space, and I’m honored to have him lead and define the strategy for how the latest advancements in privacy enhancing technologies can deliver scalable solutions for enterprises in healthcare, financial services, and other industries globally.”



3 Key Figures in the History of Privacy-Enhancing Technology

Privacy-enhancing technology may not appear in as many headlines as blockchain or cryptocurrency technologies, but behind the scenes, privacy-enhancing technology is enabling scientific breakthroughs and unprecedented business insights.

The privacy technologies widely used today are the result of developments made over the past half-century. Extremely gifted and innovative change makers have made groundbreaking contributions spanning this time period, from Andrew Yao developing essential principles in the early 1980s at the University of California Berkeley to Cynthia Dwork devising privacy-based research principles just a few years ago while at Microsoft Research. The developments made by a handful of key figures have been instrumental in advancing this area of technology, and we are enjoying the fruits of their labor today. While there are many people to highlight and thank for their contributions to the current state of privacy-enhancing technologies, below we feature three figures that have played fundamental roles.


Andrew Yao

In 1982, Andrew Yao was the sole author of a paper that would lead to a game-changing privacy technology called multi-party computation. In addition to developing the theoretical concept, Yao developed several fundamental multi-party computing algorithms on which the majority of today’s protocols are built.

In the seminal paper he presented at the 23rd Annual Symposium on Foundations of Computer Science, Yao used a simple riddle to introduce the problem he hoped to solve: Two secretive millionaires having lunch decide the richer person should pay the bill, but how can they do this if neither one wants to reveal what they are worth?

The solution to this riddle, Yao determined, is a two-party protocol that can determine the Boolean result of private input 1 ≤ private input 2. Called the Garbled Circuits Protocol, Yao’s approach involves a Boolean gate truth table that is “garbled,” that is, obfuscated using random strings or labels. This truth table is sent from the first party to the second party, who evaluates the garbled gate using a symmetric encryption key to produce a Boolean result.
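To make the garbling idea concrete, here is a toy sketch of a single garbled AND gate in Python. This is illustrative only: real protocols use proper symmetric encryption and optimizations such as point-and-permute, whereas the hash-based "encryption" and row tags here are simplifying assumptions.

```python
import hashlib
import os
import random

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def wire():
    """A wire is a pair of random labels: one for bit 0, one for bit 1."""
    return (os.urandom(32), os.urandom(32))

a, b, c = wire(), wire(), wire()  # two input wires, one output wire

# Garble: for each input combination, encrypt the correct output label
# under the two input labels, plus a tag so the evaluator can find the
# row matching the labels it holds. Rows are shuffled so their order
# reveals nothing about the underlying bits.
table = []
for va in (0, 1):
    for vb in (0, 1):
        ct = xor(H(b"enc", a[va], b[vb]), c[va & vb])
        tag = H(b"tag", a[va], b[vb])
        table.append((ct, tag))
random.shuffle(table)

def evaluate(table, label_a: bytes, label_b: bytes) -> bytes:
    """The evaluator holds one label per input wire and learns exactly
    one output label, without learning which bits the labels encode."""
    for ct, tag in table:
        if tag == H(b"tag", label_a, label_b):
            return xor(H(b"enc", label_a, label_b), ct)
    raise ValueError("no matching row")

# With the labels for bits (1, 1), the evaluator recovers the label for 1 AND 1.
assert evaluate(table, a[1], b[1]) == c[1]
```

Chaining many such gates, with output labels feeding the next gate's inputs, yields a full garbled circuit for an arbitrary Boolean function.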

Multi-party computation extends this protocol beyond two parties: it is a system that allows multiple parties to compute a shared function using individual private inputs.

The practical application of this protocol was difficult to achieve until the 2000s, when more sophisticated algorithms, fast networks, and more powerful and accessible computing made it practical to develop multi-party computing systems. Yao’s work became even more relevant during the rise of Big Data and machine learning.

By enabling the privacy-preserving use of large datasets, multi-party computation has become a valuable tool in machine learning. It allows multiple parties to collaborate on a model built from a combined dataset without exposing the raw inputs of individual participants.
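
One standard building block behind such collaborations is additive secret sharing, sketched here for a hypothetical scenario of three hospitals computing a combined patient count. No party ever sees another’s raw input, only random-looking shares.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list:
    """Split a value into n additive shares that sum to the secret mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three hospitals jointly compute a total patient count without revealing
# their individual counts: each splits its input and distributes the shares.
inputs = [120, 450, 330]
all_shares = [share(x, 3) for x in inputs]

# Party i sums the i-th share of every input; each partial sum alone is
# indistinguishable from a random value.
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

# Combining the partial sums reveals only the aggregate.
total = sum(partial_sums) % PRIME
print(total)  # 900
```

Real protocols add machinery for multiplication, malicious parties, and dropout, but the additive trick above is the core of how a sum can be computed with no single point of exposure.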

Thanks to the pioneering work of Andrew Yao, machine learning systems have access to a wider variety of sensitive data, enabling critical new breakthroughs and insights in fields such as precision medicine and diagnostic imaging.


Cynthia Dwork

Cynthia Dwork is a theoretical computer scientist at Harvard University specializing in cryptography, distributed computing, and privacy technologies, with more than 100 academic papers and two dozen patents to her name.

In 2006, Dwork was the lead author of a groundbreaking paper, presented at the Third Theory of Cryptography Conference, that established principles for a new kind of privacy-enhancing methodology: differential privacy. Dwork has said conversations with philosopher Helen Nissenbaum inspired her to focus on ways to maintain privacy in the digital age.

Differential privacy describes a family of mathematical methods that let researchers compute on large datasets containing personal information, including medical and financial records, while maintaining the privacy of individual contributors to the dataset. These methods protect privacy by adding small amounts of statistical noise to either the raw data or the output of computations on it.

Differential privacy methods are designed to ensure that the added noise doesn’t significantly dilute the value of data analysis. At the same time, these methods guarantee that the result of an analysis is essentially unchanged whether any given individual opts in or out of the dataset. Thus, differential privacy prevents data analysis from leaking individuals’ personal information. This groundbreaking approach addresses many of the limitations associated with previous privacy techniques.
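
The core mechanism can be sketched in a few lines of Python. This is a minimal illustration of the Laplace mechanism for a counting query; the dataset and epsilon value are made up for the example, and production systems additionally track a privacy budget across many queries.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from the Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Counting query with epsilon-differential privacy.

    A count changes by at most 1 when one record is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 29, 41, 67, 58, 72, 45]
noisy = private_count(ages, lambda a: a >= 60, epsilon=0.5)  # true answer is 2
```

Any single person’s presence shifts the true count by at most one, which the noise masks; yet over a large dataset the noisy answer stays close to the truth, preserving analytic value.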

In 2015, Dwork was the main author of another key paper called “The reusable holdout: Preserving validity in adaptive data analysis” that outlined how differential privacy could be used to further machine learning-based scientific research.

In scientific research, machine learning typically involves the use of a training dataset and a testing, or ‘holdout’, dataset — on which a trained machine learning system conducts an analysis. After the holdout dataset is analyzed, it is no longer seen as an independent ‘fresh’ dataset. In the 2015 paper, Dwork and her colleagues proposed using differential privacy to preserve the independence of the holdout dataset.

According to Dwork, this application of differential privacy targets a future in which new data is hard to come by. Since machine learning requires massive amounts of data, and data is ultimately a finite resource, this technique enables repeated use of the same holdout dataset.


David Chaum

Having taught graduate-level business administration at New York University and computer science at the University of California, Berkeley, David Chaum laid the foundation for a number of business-focused privacy-enhancing computation techniques, including digital signatures, anonymous communications, and a trustworthy digital system for secret-ballot voting.

In a groundbreaking 1983 paper, Chaum established principles for blind signatures. This digital signature system enabled untraceable payments by allowing a payment receiver to sign for a payment without knowing its origin. The same paper also laid out principles for digital cash, a precursor to cryptocurrency, describing how people could obtain and spend digital currency untraceably.
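
The blind-signature idea can be illustrated with textbook RSA, the setting Chaum used. The tiny primes and the “coin” string below are purely illustrative; real deployments use much larger keys and padded hashing.

```python
import hashlib
import math
import random

# Tiny RSA keypair (illustrative only; real systems use 2048-bit keys).
p, q = 1000003, 1000033
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (Python 3.8+ modular inverse)

# The message to be signed: a hash of a hypothetical digital coin.
m = int.from_bytes(hashlib.sha256(b"coin#42").digest(), "big") % n

# Blind: the payer multiplies the message by r^e so the bank can't read it.
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# The bank signs the blinded value without ever learning m.
blind_sig = pow(blinded, d, n)

# Unblind: dividing out r yields an ordinary RSA signature on m itself.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m  # verifies like any normal RSA signature
```

The bank’s signature is valid on the original message, yet the bank cannot later link the signed coin back to the withdrawal, which is exactly the untraceability property Chaum wanted.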

Initially, Chaum found these politically and socially tinged concepts to be very unpopular in academic circles. Facing resistance, Chaum decided to strike out on his own to create DigiCash, a digital payments company. The DigiCash system was called eCash and its currency was called CyberBucks. The system was very similar to Bitcoin, but the DigiCash system was centralized, unlike Bitcoin’s decentralized network. Private-sector success helped the idea of privacy-enhanced payments catch on, and Chaum would go on to present his cornerstone concept of cryptocurrency at the first International Conference on the World-Wide Web, held at CERN in Geneva, Switzerland, in 1994.

In 1989, Chaum and a colleague developed ‘undeniable signatures’ — an interactive signature system that allows the signer to control who is able to verify the signature. In 1991, Chaum and another colleague developed a system for “group signatures” that allowed one individual to anonymously sign for an entire group.

Over the years, Chaum has also developed a number of digital voting systems designed to preserve a secret ballot and protect the integrity of elections. One cryptographically verifiable system called Scantegrity was used by Takoma Park, Md. for an election in November 2009 — the first time such a system was used in a public election.

While Chaum was able to develop an impressive array of privacy-enhancing techniques, he’s probably best known for devising the core principles behind something that gets a lot more headlines: blockchain technology.


We’re taking the next step in privacy-enhancing technology

The TripleBlind Solution expands on the data privacy-enhancing technologies developed by the pioneers in our industry.

Our technology allows easy access to the foundational multi-party computing approach established by Yao, as well as other privacy-enhancing technologies, in a seamless package. By leveraging our solution, researchers, financial institutions, and other organizations are able to focus on innovative collaborations while maintaining possession of their own proprietary assets.

Our solution also meets the highest privacy standards. In the same way differential privacy protects individuals, our privacy-enhancing software allows data owners to operationalize sensitive data while protecting the privacy of individuals.

If you would like to learn more about the latest in data privacy technology and tools, please contact us today.

The 5 Most Expensive Types of Data Breaches

Plenty of challenges can make an enterprise’s pockets hurt, but few can break the bank and tarnish brand image like a major data breach. IBM’s 2021 Security Analysis found that the average total cost of a data breach increased by nearly 10%, ballooning from $3.86 million to $4.24 million in the past year alone. Cybercrime costs are projected to reach $10.5 trillion USD annually by 2025, highlighting a growing need for privacy-enhancing and security-enforcing solutions for data-intensive sectors.

In this article, you’ll learn about the five most expensive types of data breaches –– and how privacy-enhancing computation can better protect your enterprise while you leverage data to catalyze innovation. Estimated figures are the average total cost and frequency of data breaches by initial attack vector, as cited in IBM’s 2021 Cost of a Data Breach Report.

 #5: Vulnerabilities in Third-Party Software – $4.33 million

Supply chain, vendor-supplied, or outsourced software can solve business problems without requiring in-house development, management, or maintenance. Although third parties may be able to improve key business processes, they aren’t under your company’s direct jurisdiction, limiting your access to critical information regarding their security policies or risk management practices.

Third-party software might leave vulnerabilities that can be exploited by hackers or malicious programs, increasing the risk that your organization fronts the cost in the event of a data breach. In 2020, IBM and The Ponemon Institute also found that “data breaches caused by a third party, extensive cloud migration, and IoT/OT environments were also associated with higher data breach costs.” If sharing data is essential for a business partnership, security must be factored into any mutual data management strategies –– otherwise, your organization might face magnified consequences for tacitly compromising shared data.

#4: Social Engineering Criminal Attacks – $4.47 million

In information security, social engineering is the psychological manipulation of people into performing actions or divulging confidential information. Four popular attacks that social engineers use to target their victims include:

  • Pretexting – Similar to phishing, pretexting is a method of convincing victims to divulge sensitive information. Pretexting is often used to gain access to client data from banks, credit card companies, utility companies, and transportation companies.
  • Baiting – Using this tactic, cybercriminals use false promises to pique a victim’s greed or curiosity. One common method of baiting is leaving a malware-infected flash drive in an obvious location for a potential victim to find and plug into their computer.
  • Quid pro quo – This tactic involves a hacker requesting the exchange of critical data for a service, such as by impersonating your telephone service provider or banking institution.
  • Tailgating – This is a physical social engineering attack where someone seeks entry into a password-protected or restricted area, most often to steal critical information or hardware from an organization.

#3: Insider Threats – $4.61 million

Insider threats are the primary cause of over 60% of data breaches. Insiders are individuals with legitimate access to company assets or information who cause security harms to a business, whether intentionally or unintentionally. Traditional security measures implemented by organizations tend to focus on external threats, so it can be challenging to identify or even mitigate threats posed from within the organization. Types of insider threats include:

  • Malicious insiders – These individuals intentionally and maliciously abuse credentials to steal information for personal, financial, or criminal incentives. A disgruntled former employee who sells information to a competitor or sabotages internal infrastructure is an example of a malicious insider.
  • Careless insiders – Individuals who unknowingly expose an organization to outside threats are considered careless insiders. Examples of this threat include leaving a device exposed, falling victim to a scam, or clicking on a link that infects a computer or system with malware.

#2: Phishing – $4.65 million

What’s the most popular activity for social engineers who love cyber attacks? Going “phishing.” This method is so cost-intensive for organizations facing a data breach, it deserves its own category. These are the most common types of phishing attacks:

  • Deceptive phishing – With this method, cybercriminals send large-batch emails and impersonate a legitimate company. These phishing scams frequently use threats or a sense of urgency to scare users into divulging personal information, such as login or credit card details.
  • Spear phishing – This targeted phishing approach involves attacking a specific individual or organization. “Spear phishers” will personalize emails using details relevant only to the targeted party, leading the recipient to believe they have a connection or obligation to the sender.
  • Whaling – Instead of targeting any employee or an entire organization, cyber attackers using this method focus specifically on C-suite members of a company. By researching executives and fostering seemingly-legitimate relationships, attackers gain access to even more sensitive and valuable information.

#1: Business Email Compromise (BEC) – $5.01 million

Business Email Compromise, also known as BEC, exploits email systems by targeting lower-level employees at an organization who possess administrative rights. By pretending to be an employee in another department or a C-suite executive, attackers are able to request specific and sensitive information about a company or its clients. Criminals who execute BEC scams might:

  • Spoof email accounts or websites by using slight variations on legitimate email addresses that are easy to miss at a glance.
  • Send spear phishing emails that appear to be from a trusted sender in an attempt to access company accounts, calendars, or sensitive data.
  • Use malware to infiltrate company networks and gain undetected access to data sets, including passwords and financial account information.
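
As an illustration of how slight address variations can be caught mechanically, the sketch below flags sender domains within a small edit distance of a trusted domain. The function names are hypothetical, and real defenses combine this kind of check with controls such as DMARC and sender authentication.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    for trusted in trusted_domains:
        d = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < d <= max_distance:
            return True
    return False

print(is_lookalike("examp1e.com", ["example.com"]))  # True: '1' swapped for 'l'
print(is_lookalike("example.com", ["example.com"]))  # False: exact match
```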

How can you protect your organization and unlock the intellectual property value of your data?

Data is likely the most valuable asset to your organization. From network files with critical client information to private information gathered from years of groundbreaking research, every byte of data is foundational to business growth and operations. If you and cyber attackers both know this, what are actionable steps you can take to protect your data and unlock its intellectual property value?

  1. Prioritize security at every level of your company’s operations
    The first step to managing confidential information is discussing and implementing key security measures. You can reduce the risk of a data breach by making conscious decisions about what information is collected, where it’s stored, how long you’ll keep it, and who else can access it. This includes providing comprehensive and updated training for employees at all levels and in all departments of your organization, even if they never directly interact with sensitive data.
  2. Consider implementing zero-trust architecture for all network activity
    This security framework requires that all users be authorized, authenticated, and continuously validated before gaining access to sensitive data. In previous data breaches, companies that implemented zero-trust architecture paid an average of $1.76 million less than those without zero-trust strategies.
  3. Test for common vulnerabilities to guard against attacks
    From using brute force methods to crack passwords to strategically bypassing authentication screens, cyberattackers have tech-savvy tools up their sleeves. Improve security by beating hackers to the punch and conducting a robust vulnerability assessment.
  4. Use data collaboration tools without transmitting raw data
    If you’re looking to collaborate with another organization around sensitive information, privacy and security risks are likely top of mind. Experts have developed comprehensive solutions that apply to a variety of use cases for your organization. Privacy-enhancing technologies such as homomorphic encryption, differential privacy, and federated learning are all tools that can accelerate responsible innovation –– without compromising security.

TripleBlind enables organizations to pursue ambitious data projects and prevent the risks of a costly data breach. By building on well-understood principles such as federated learning, secure multi-party computation, and more, we radically improve the practical use of privacy-enhancing technologies. Unlike most third-party solutions, TripleBlind’s software is fully containerized on the end users’ infrastructure, minimizing the attack surface for most threats. With over two dozen documented use cases for mission-critical business problems, we’re ready to help you scale up data usage –– instead of cutting it back. 

Check out our whitepaper or schedule a demo with us. We’d love to explore how privacy-enhancing computation can help your business!

Trusted Execution Environments for Data Safety

Need to schedule a doctor’s appointment? There’s an app for that. Want to pay your rent from a mobile device? There’s an app for that. Interested in adding jokes to your blogs about data security? Honestly, there’s probably an app for that too. The digital age offers a seemingly infinite number of software solutions for everyday challenges, but not without a cost –– the more software a device has, the more vectors it has for cyberattacks. To provide a secure refuge from software-focused cyberattacks, Pro Security from AMD, Secure Enclave processors from Apple, and Software Guard Extensions from Intel create what is called a Trusted Execution Environment (TEE).

Considered a vital part of security architecture in many devices, a Trusted Execution Environment (TEE) limits access to allow for the highly trusted execution of code, keeping threats safely outside the environment.

Existing first as individual proprietary solutions, TEE implementations took on a standards-based approach starting in the mid-2000s. In 2004, a partnership between Trusted Logic and Texas Instruments produced a generic TEE. In 2006, Arm launched a TEE implementation called TrustZone that used Trusted Logic software. That same year, the Open Mobile Terminal Platform released the first recognized set of standards for TEE implementation. In 2012, GlobalPlatform and the Trusted Computing Group (TCG) founded a joint working group focusing on TEE specifications and use.

A TEE creates a separate execution environment that operates in parallel to a typical operating system like Windows or Android. Devices that use a TEE allow untrusted applications to operate in an unsecured Rich Execution Environment (REE) and trusted applications to operate in a highly secured TEE. These separate environments protect both sensitive data and code from software attacks without major performance costs to the device. 

A TEE creates a trusted environment by requiring that an internal operating system, any assets, and any code be passed through a security system developed by the designers. This typically means everything in a TEE is signature checked, isolated, or unassailable. A TEE will only allow code that has been authorized, with authorization verified after a secure ROM boot, which checks the integrity and authenticity of the operating system.
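
The integrity check at boot can be illustrated with a simplified sketch. Here an HMAC under a hypothetical device key stands in for the vendor signature a real secure ROM would verify; actual implementations use asymmetric signatures and chained measurements of each boot stage.

```python
import hashlib
import hmac

# Hypothetical key provisioned in hardware at manufacture time.
DEVICE_KEY = b"burned-into-fuses-at-manufacture"

def sign_stage(image: bytes) -> bytes:
    """Vendor side: tag a boot-stage image with a MAC over its measurement."""
    return hmac.new(DEVICE_KEY, hashlib.sha256(image).digest(), "sha256").digest()

def verify_and_boot(image: bytes, tag: bytes) -> bool:
    """Boot ROM refuses to hand control to code whose measurement doesn't match."""
    expected = hmac.new(DEVICE_KEY, hashlib.sha256(image).digest(), "sha256").digest()
    return hmac.compare_digest(expected, tag)

os_image = b"trusted OS v1.2"
tag = sign_stage(os_image)
print(verify_and_boot(os_image, tag))          # True: untampered image boots
print(verify_and_boot(os_image + b"!", tag))   # False: modified image is rejected
```

The key property is that even a single flipped byte changes the measurement, so unauthorized code never gains control of the trusted environment.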

Although a TEE is kept isolated, it is designed to perform in a normal environment. For example, an application running in a TEE has complete access to the main processor and memory. However, code being executed in a TEE cannot be seen or altered. Thus, a would-be attacker can be relegated to performing full-privilege malicious actions in the unsecured REE.

Overall, the inclusion of a TEE allows for greater security, a richer operating system with greater functionality, and more secure components. Any code outside of the TEE — including the operating system — cannot compromise the integrity and confidentiality of operations within the environment. A TEE also prevents hardware-based attacks by being physically separated from the rest of the system. This means a cloud service provider can be kept out of the established trusted environment.


Trusted Execution Environment support from major players

Because they unlock options for manufacturers, software makers, service providers, and consumers, TEEs lend themselves to a wide array of devices and IT sectors. As a result, many major tech companies have developed trusted solutions.

Implementation requires hardware support and there are several options available in modern processors. TrustZone technology from Arm uses a system-wide approach that features hardware-enforced isolation on the CPU. AMD’s Pro Security platform is a subsystem built into the company’s processors. Hardware support from Intel involves a collection of security instructions called Software Guard Extensions being built into some of the company’s CPUs. Apple’s approach is to use a dedicated Secure Enclave Processor to handle security keys and biometric data.

Major companies have also developed different TEE implementations. One of the biggest is Google’s Trusty, which implements a TEE for the Android operating system. Compatible with TrustZone and an open-source project, Trusty is an isolated operating system that runs in parallel and on the same processor as the Android OS.

Samsung, Qualcomm, Huawei, and others have also developed commercial implementations. These implementations must meet the standards set by an organization called GlobalPlatform.


Not a perfect security solution

Although a TEE is designed to be a robust security measure, it requires full faith in trusted applications. Not all trusted applications are without vulnerabilities, which could leave devices open to attacks. For example, vulnerabilities have been identified in TrustZone and a popular TEE used on many Samsung devices called Kinibi.

One identified vulnerability is related to the fact that some TEE systems have trusted and untrusted code running on the same hardware, creating an opening for micro-architectural attacks. Another vulnerability is related to the idea that trusted code cannot, in fact, always be trusted.


Shoring up your data privacy with TripleBlind

Simply put, TEEs are another layer of security with vulnerable attack surfaces — and not a silver bullet solution. TEEs make cyberattacks more difficult, but they are hardly 100 percent secure.

As noted above, vulnerabilities have been identified in some popular TEE implementations. When a TEE is based in hardware, patching vulnerabilities can take a lot longer than patching software-based vulnerabilities. In some situations, it may not be possible to patch the hardware.

In addition to being a superior alternative to TEEs with respect to security, the software-based TripleBlind Solution is capable of addressing other shortcomings related to data privacy. 

First, TEEs do not enable digital rights on sensitive data. Data shared with a collaborator for processing could still be accessed and possibly used for unauthorized purposes. Second, TEEs do not address compliance issues related to laws like General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the U.S.

The TripleBlind Solution allows users to retain possession of their data, eliminating issues related to data residency and digital rights management. Because data remains in place, the TripleBlind Solution also simplifies compliance with privacy regulations. In addition, the TripleBlind Solution compares favorably with other privacy-enhancing technologies like homomorphic encryption and tokenization.

Perhaps most importantly, our innovative technology can significantly increase the value of data collaborations. Our technology unlocks critical business insights and medical discoveries by allowing for the increased collective computation of sensitive data. Backed by organizations like Accenture and the Mayo Clinic, the TripleBlind Solution is the most complete and scalable approach to sensitive data operations –– without violating the GDPR, PDPA, or HIPAA. It allows all participants to retain possession of sensitive data and algorithms, simplifying issues related to digital rights management and compliance.

If your company is looking to get more out of its sensitive data operations, contact us today to learn more.

Unlocking Research with HIPAA Compliant Data Encryption

The Health Insurance Portability and Accountability Act (HIPAA) plays an essential role in protecting patients. When you’re following HIPAA-compliant data encryption standards, however, it becomes difficult to get the most out of your data. There are strict rules around how data can be used (or who can use it), and making a data set usable often means stripping away its most useful components. 

In most industries today, Big Data is redrawing the limits of human knowledge and capability. Unfortunately, highly regulated industries like healthcare have a harder time maximizing these benefits. While HIPAA is paramount to safeguarding patient privacy, regulations prevent researchers from exploring the full potential of their patient data.

A single hospital’s internal data might be enough to draw conclusions about common diagnoses, but meta studies have found this approach to building datasets for research can result in too small (and too biased) a sample size to provide reliable conclusions. Larger data sets are necessary, but researchers within healthcare organizations don’t always know the options available to them.

Embracing the spirit of the growing legal requirements for individual privacy, new privacy enhancing technologies are fundamentally changing the way healthcare organizations can unlock patient data, especially for collaboration.

But how might these solutions be better than current practices? To start, let’s take a quick look at some issues with the current ways healthcare organizations handle data.


The Limitations to Current HIPAA-Compliant Data Use Practices

Using Institutional Review Boards (IRBs) for decrypted data use: slow, costly, constrained

Institutional Review Boards (IRBs) offer a way for organizations to collectively use data, but this has multiple issues. 

Firstly, the level of bureaucracy in an IRB isn’t conducive to novel research. Taking representatives from each organization, deciding who’s getting what data, what they can do with the data (and why), and dealing with all the compliance and checkpoints along the way — all this red tape makes research slow, limited, and expensive.

Additionally, since setting up an IRB involves legal review (which is expensive in both dollars and time), the scope of research has to be carefully understood beforehand. If you wish to dive deeper into any novel findings you uncover, this can require an entirely new legal review and IRB. Thus, the process inhibits the effectiveness and potential of research by discouraging researchers from doing exactly what they are supposed to do.

Even after all this, you’re still responsible for the data you’ve allowed other organizations to access, so you still have to trust that other IRB participants won’t make human mistakes when handling data you are responsible for protecting.


Deidentified Data: A False Sense of Security

While you can always deidentify your patient data before taking part in collective research, even certified deidentification standards can’t fully free you from concern.

It might be tempting to think deidentified data is anonymized, but being “deidentified” is very different from being “unidentifiable.” Researchers have been demonstrating for years that they can reidentify data by pairing it with other data sources, which wouldn’t be possible if it were truly anonymized.
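
A minimal sketch of such a linkage attack, using entirely fabricated records: the medical table carries no names, yet joining on shared quasi-identifiers (ZIP code, birth year, sex) reidentifies a patient from a public roster.

```python
# All records below are fabricated for illustration.
medical = [
    {"zip": "64105", "birth_year": 1971, "sex": "F", "diagnosis": "asthma"},
    {"zip": "64106", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
]
voter_roll = [
    {"name": "A. Smith", "zip": "64105", "birth_year": 1971, "sex": "F"},
    {"name": "B. Jones", "zip": "66210", "birth_year": 1990, "sex": "M"},
]

QUASI = ("zip", "birth_year", "sex")

# Join the "deidentified" table against the public roster on quasi-identifiers.
reidentified = [
    (v["name"], m["diagnosis"])
    for m in medical
    for v in voter_roll
    if all(m[k] == v[k] for k in QUASI)
]
print(reidentified)  # [('A. Smith', 'asthma')]
```

No field in the medical table is identifying on its own; it is the combination, matched against an auxiliary dataset, that exposes the individual.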

Similarly, artificial intelligence models have gotten so sophisticated that they can reidentify this kind of data with ease, so relying on deidentification alone is akin to setting your password to “password.”


Ignoring Data That Can’t Be De-identified: Large Opportunity Costs

In many cases, you can’t simply strip off identifying data without rendering it useless for research. Say you’re studying the human eye — eye veins are as unique as fingerprints, so you can’t simply distort the data, at least not without making your research useless. Similarly, genetic data and electrocardiograms are so unique to each person that they could always be used to identify the individual in question.


A Better Solution: One-Way Encryption for Safe Collaboration and Data Use

Normally, using encrypted data means the user of the data needs to decrypt it first, but decryption is what introduces the risks (and incomplete solutions) mentioned above. So what if you never had to decrypt data, but could still make full use of it?

The TripleBlind Solution allows data users to perform the same operations on data as they normally would, without having to “see”, copy, or store any data. This involves using one-way encryption, which is like locking up the data and throwing away the key: mathematically impossible to reverse. Due to the way these operations are carried out on one-way encrypted data, our solution allows data owners full Digital Rights Management (DRM) over how their data is used on a granular, per-use level.

Since any AI or analytic code can be run on this one-way encrypted data, the output is identical to running code on raw data, without putting privacy at risk. This is possible because of the innovations by TripleBlind on best-in-class, privacy-enhancing computation techniques.

Our aim with this technology is to provide tools for organizations to stop wasting valuable time worrying about security or compliance issues around research, freeing you to pursue more creative or ambitious investigations.

Since our solution ensures the safe handling of sensitive data, researchers can use data much more freely. This means you can start analyzing unconventional data points like credit card statements or driving patterns, rather than just MRIs and blood tests.

This adds a new wealth of data into diagnostics, enabling research that could vastly improve the quality and effectiveness of patient care, all while maintaining patient anonymity. Even though the data is sensitive, it remains private.


Blind to Data, Blind to Processing, and Blind to the Result

TripleBlind allows your data to remain behind your firewall while it is made discoverable and computable by third parties for analysis and ML training.

These innovations build on well-understood principles, such as federated learning and multiparty compute. Our solution unlocks the intellectual property value of data, while preserving privacy and ensuring compliance with HIPAA and GDPR and all data localization laws. Data owners never sacrifice control over sensitive assets.

Want to see how it works? Learn more about our technology.