Webinar: The Present and Future of Privacy in Healthcare

On the heels of the 2022 HIMSS Global Health Conference and Exhibition, TripleBlind is proud to have our SVP of Healthcare, Dr. Suraj Kapa, discuss how healthcare institutions can collaborate around data without compromising privacy, speed, or data fidelity.

Dr. Suraj Kapa is a cardiac electrophysiologist at Mayo Clinic. Over the course of his career he has published more than 200 peer-reviewed articles and book chapters, given hundreds of invited lectures, and filed over 30 patents that have resulted in startups. He is highly sought after for his views on the future of digital health and healthcare delivery.

In this webinar, we will discuss:

  • How privacy-enhanced computation between organizations can facilitate rapid innovation in healthcare, especially by enabling AI development using high-quality data from around the globe.
  • The current barriers that prevent healthcare institutions from unlocking data in safe and compliant ways.
  • How HIPAA regulations, as well as legal and compliance reviews, third-party contracts and audits, de-identification tasks, and residency rules, all create unique challenges when using data.

Begin Transcript:

Chris Barnett (00:00:04):

All right, folks. Thank you for joining. We have got a number of attendees coming in here, right at the top of the hour. So we’re just going to allow one minute to get everybody on board and then we’re going to kick off. So appreciate you waiting just a second for the other folks that are attending, and then we’re going to go in just a minute. We’ll be right with you. Thank you. All right. There’s still a few more people coming in, but we’ve got a good audience ready to go. So we’re going to go ahead and get started. So welcome everybody to a TripleBlind webinar and discussion. We’re hoping to get plenty of questions from the audience. There should be a Q and A button that you see there. So I’m Chris Barnett, and I’m just going to introduce our panelists here and let them go for it.

Chris Barnett (00:01:30):

So we’re obviously here today to talk about the present and the future of privacy in healthcare. Our two panelists will be, first of all, Dr. Suraj Kapa. He’s a cardiac electrophysiologist, and over the course of his career he’s published more than 200 peer-reviewed articles and book chapters, given hundreds of lectures just like this one, and filed more than 30 patents, and he’s a very popular speaker. So we’re excited to have him here today in his role at TripleBlind as our SVP of Healthcare. We appreciate your time today, Dr. Kapa, thank you. Our other panelist is Jay Smilyk. He is our chief revenue officer, and Jay has more than two decades of industry experience in these categories and was most recently the chief revenue officer of Sepio Systems. He’s got that same role, chief revenue officer, here at TripleBlind. Jay oversees our strategic operations, including sales, marketing, and recruitment. So if you’re interested in doing business with TripleBlind and talking more about that, Jay would be a great person to connect with. So, Dr. Kapa, I’ll turn it over to you to get us started.

Dr. Suraj Kapa (00:02:35):

Thank you so much, Chris, and thank you everybody for attending this webinar today. So as Chris mentioned, I’m a cardiac electrophysiologist, but I actually have a particular passion in artificial intelligence and scalable analytics as they relate to digital platform development, and how that impacts both the present and the potential future for healthcare as an institution, not just within the United States, but globally. We do want to keep this as interactive as possible. So as Chris mentioned, we do have the Q and A field, and please feel free to ask any questions throughout. I’ll try to answer them as we’re going, but otherwise, we’ll make sure we get to them at the end as well. You’ll also have our contact information to get in touch with us at any time after the meeting. So as far as objectives for this conversation, the first thing I want to focus on is reviewing the advancing need for digital platforms in healthcare and what exactly we mean by that.

Dr. Suraj Kapa (00:03:34):

The next thing I would like to talk about is the data problem in advancing digital platforms. What are the actual hurdles we’re experiencing as we try to engage the digital era into the next paradigm of what healthcare can look like? And lastly, we’ll go over the need for privacy-enhancing technologies to address the growing needs for data mobility and what we mean by data mobility, data liquidity, and all those terms that people use as colloquialisms nowadays, when they talk about digital data, digital platforms, and interacting with data. So let’s take the 30,000-foot view. What is data in medicine or data in healthcare?

Dr. Suraj Kapa (00:04:20):

As a medical student, a resident, a fellow, and as a practicing cardiac electrophysiologist, the things we always think about when we think about learning in medicine are the traditional large textbooks, each weighing about 30 pounds, that we carry around to get a semblance of retrospective knowledge about what’s going on and what could be going on with the patient. But that’s not where medicine limits itself. It’s not just reading a Wikipedia page. It’s not just reading a textbook, because really where the context of medicine comes into play is at the forefront, when you’re interacting with patients. When you’re actually speaking to the patient and contextualizing that retrospective knowledge you get.

Dr. Suraj Kapa (00:05:08):

Then all of the diagnostic information that we see and that we visualize as we interact with that patient lets us get better insights as to what that patient is presenting with, what their history shows, what their physical examination shows. And ultimately taking it even beyond, to the labs, to the scientists, to the basic scientists, to really better understand on a personalized, individualized level what might be going on with a given patient. I think it’s very important to think about this in the digital era. Bob Wald actually just put out a point that the future of medicine is pushing the limits of remote connectivity between patients and medical providers, and he was asking, besides high speed and low latency, what areas of highly secure, HIPAA-compliant technology are required to get there. I think this question really consolidates what we’re going to be talking about over the course of the next 45 minutes.

Dr. Suraj Kapa (00:06:06):

When you look at this particular slide and you look at all the areas in which data is being contributed in terms of taking the best possible care of the patient, we have to realize that as much as this creates a huge opportunity to offer best care and best practices, it also limits scalability because, in each of these examples, you’re looking at expensive textbooks, or you’re looking at individuals directly interacting with data, but not everybody interacts with every patient. And ideally, you would want to take these insights and these understandings and deploy them globally, so that a clinician who’s only one year into practice can get the same value from the history of how a patient’s been interacted with as somebody who’s been in practice for over 30 or 35 years. And that’s a large part of what we talk about when we talk about digital insight development and digital platform development; part of it is leveraging this extraordinary cohort of data that’s being formed as we take care of patients and putting it in a way that can actually be deployed in a global arena.

Dr. Suraj Kapa (00:07:20):

The reality that we’re facing right now is that provider-level costs are increasing. The traditional paradigm of brick-and-mortar medicine, frankly, in many ways is falling apart. The reason is the human touch-points that are required to engage patients: even as the technology improves and even as our diagnostic modalities get better, each one requires so many logistical interactions that the costs related to them are inevitably going to expand. And ultimately we need to think about how we can go beyond that. How can we actually think about this biomedical data that, frankly, is getting cheaper at the data acquisition level, but not at the data actualization level? Because the fact is there are more and more form factors to get health-level information from individuals, even at the consumer level. And the acquisition of data is actually pretty, I don’t want to call it easy, but it’s much more cost-effective than it ever has been.

Dr. Suraj Kapa (00:08:27):

But the problem is, how do we operationalize that data? How do we actualize how that data’s going to be used to offer care to that individual patient? And that’s a large part of where we’ve hit the headlong hurdle of digital health, because the truth of the matter is, for all we talk about digital, that analog human almost inevitably has had to come in the middle to understand what the data’s showing, to contextualize it in the context of that individual patient. And ideally, we can take it a step further. We can take it a step beyond that, but what I’ll talk about in a little bit is the fact that there are regulations and there are compliance issues that often prevent even the most caring physician from interacting with their patients’ data, or from allowing their knowledge to be democratized and scaled at a global level to reach individuals everywhere. In many cases, that’s because of the issues with how data is moved between different institutions, between different states, between different nation-states.

Dr. Suraj Kapa (00:09:38):

Ultimately, what we’re looking at doing is bringing machine and human together as best as possible, as efficiently as possible, so we can reach an improved medical intelligence: a mechanism by which we can actually align all these clinical insights we have, digitize them, incorporate them with these evolving machine intelligence platforms, and actually create globally scalable medical health opportunities. And what’s enabling that is the extraordinary improvements in computing power that we’re seeing.

Dr. Suraj Kapa (00:10:15):

The fact is, if we look back to the early 2000s, even the supercomputers of that era were only able to complete the computations that our iPhones can in the 2020s. And it’s going to happen that even our supercomputers of today are going to be surpassed by our smartphones of tomorrow. That creates a huge opportunity. That creates the opportunity where we can actually leverage data. We can actually leverage insights and complex analytic functions at the point of care, at the point of interaction. We can make a more seamless interface, and we can actually take what traditionally that human 30 or 40 years into practice would’ve required to interact with a patient, to understand how best to take care of them, and drop that into something that can happen within seconds.

Dr. Suraj Kapa (00:11:17):

But this concept of using higher-level analytic functions, of doing things along the lines of artificial intelligence and machine learning, is not a new concept. I think it’s important to remember, when we talk about data privacy, when we talk about leveraging data, when we talk about deploying data, that the concepts of trying to do this are not new. The fact is it can really be taken as far back as the origin of the printing press. People have always wanted to disseminate knowledge in a structured way, but not only that, they wanted to disseminate knowledge in a secure way, because whether for IP reasons or for reasons of protecting the data itself, it’s always been of interest. And really, even the most complex neural networks and machine learning approaches that we visualize and enable today were conceptualized as early as the 1950s, when Alan Turing first articulated the concepts underlying artificial intelligence.

Dr. Suraj Kapa (00:12:21):

And when the Dartmouth Conference, organized by John McCarthy, actually coined the phrase “artificial intelligence.” In that framework, by the late fifties the New York Times was already reporting that there was an “embryo” of an electronic computer that could walk, talk, see, write, reproduce itself, and be conscious of its own existence. So you have to ask yourself, why aren’t we there yet? Why, almost 70 years later, are we still talking about what we can do with digital health? What can we do to deploy it and allow it to offer the benefits to patient care that we’ve always envisioned? This is really the problem we encounter. The fact is there are several things we need to think about when we think about digital health platforms, not just as they’re enabled in the present, but as we wish to enable them in the future. The fact is the vast majority of the human population have never been a patient. They’ve hardly ever seen a doctor between the time they’re born and the time they die.

Dr. Suraj Kapa (00:13:29):

And that might be due to access issues, that might be due to the fact that they just don’t have the time or the volition to do it. In fact, within the US alone, 80% of Americans admit they delay or forego preventive care. When we think about the US Preventive Service Task Force and what they recommend, and almost a quarter don’t even have a personal physician who they can go to when they have a problem or a question. They simply wait for things to pass, but more and more people are actually seeking to enable their own interests in their own health. There are consumer-enabled platforms, again, to gather more and more complex data about patients, about their health. Medical data that traditionally was only able to be obtained through the medical care provider. But this runs headlong again into concepts of privacy. The fact is I, as somebody in my house who has a blood pressure cuff that automatically feeds to my computer along with the methods to get my own electrocardiogram.

Dr. Suraj Kapa (00:14:31):

And I can frankly even record my own heart sounds. Where am I going to feel comfortable uploading that? Is it a secure channel? How am I going to know what they’re going to do with it? And how am I going to know that my unique health situation or considerations won’t be potentially leveraged in some way that isn’t to my liking? When we look at the evolution of privacy laws, traditionally they’re very focused on the inability to re-identify a user, to track back, “oh, this person has stage four cancer,” to who that individual is.

Dr. Suraj Kapa (00:15:11):

But let’s take it a step further now, because what the world is thinking about and realizing more and more is that it’s not just “don’t track the information back to the user,” but also increasing the rights and the ability to protect how that data is being used by others. And that creates limitations inherent in how we are able to share data and how we’re able to operationalize that data, so people the world over can gather the insights that are of great interest to them to create the newest and best next technologies in order to enable improved global health.

Dr. Suraj Kapa (00:15:54):

So let’s take a step back. We talked about the importance of why we need these digital health ecosystems, with almost every single person in the world having a smartphone in their pocket, and everybody working more on satellite-enabled ecosystems in order to deliver information better, faster, and more effectively. We have to think about: well, we have the ecosystems, but what is the actual issue? And the truth is all research, not just in medicine, but all interactions, frankly, require data of some sort interacting with an analytic function. And thus for any ecosystem, any digital health ecosystem, we need to envision the data ecosystem that fits within it. When we think about the traditional data ecosystem as it operates today, it starts with a data owner. The data owner might either be searching for their own operationalization use case, or we might be talking about the opportunity to actually offer somebody the ability to run an analytic function on that data.

Dr. Suraj Kapa (00:17:10):

But they can’t simply say, “Okay, yes, we agree. You have this cool algorithm that can tell us the likelihood of my patient needing these following five tests to identify this rare disease they might be affected by.” It’s not enough for them to do that, because they can’t simply send the data to that data user in a raw format. They need to abide by all the compliance and regulatory and legal requirements, because ultimately it’s about protecting the patient and the primacy they have over their own information. So in order to enable that interaction, you have to engage legal departments to draft the contracts for terms of use. You need to ensure compliance reviews, and often these are highly specialized legal teams that look at the existing regulations and make sure we’re avoiding violations of them. We then need to actually prep the data in the right format.

Dr. Suraj Kapa (00:18:09):

There are any number of levels of pre-processing that have to happen in order to enable the ultimate data user to interact with that data. And that data prep needs to take into account what the compliance and legal reviews say as far as what’s an appropriate data set to actually enable. Then we need to actually transmit the data set, which requires a level of encryption and transmission, with the opportunity for the data user to have a decryption key to decrypt that data and use it for its intended use case. Because what they did in this data prep process, and we’ll go into that a little bit deeper, is they’ve actually tried to enable the use of that data by eliminating information that the compliance and legal teams would be concerned about getting out there. So when we think about the attendant steps at each point of this process, it all starts with the data request.

Dr. Suraj Kapa (00:19:09):

And that means understanding what data’s actually available. The fact is, if I’m a cardiovascular-only entity and somebody comes up to me saying, “I have this cool operation that works on ultrasound data,” but I don’t do ultrasounds, then their use case probably isn’t worthwhile to them. There needs to be a way to properly catalog and communicate what’s available. So really there’s a level of role-based permissioning, and this can be extremely hard with third-party data sets where we don’t understand what they have. We need ways to enable discoverability in a more efficient manner, especially when we talk about scalable digital health.

Dr. Suraj Kapa (00:19:57):

Then, as I said earlier, the legal departments inevitably have to get involved whenever we talk about the transmission of data, because any transmission of data, even data that does not contain PHI or PII, has the risk of potentially being used in ways that the data owner didn’t want it used for. For example, say I have a library of 10 million annotated electrocardiograms, and I’m working with an individual or group who says, “Hey, we’d love to use this to validate our algorithm that we just built that can potentially save a million lives.” You send it to them. Great. You’ve contributed to the common good, but you also want to make sure that data’s not used for some reason other than the intended use. How you legally offer permissions surrounding that, and not only that, how you enforce those permissions, is a very interesting question that we need to think about.

Dr. Suraj Kapa (00:20:57):

Then we get into the compliance obstacle course, and this is really where we encounter global hurdles, because the fact is the legal and regulatory landscape for what it means to operationalize data is rapidly evolving. We all know what HIPAA is, and I’ll go over in a slide in a little bit all the areas that HIPAA focuses on in terms of what is associated with de-identifying an individual, but HIPAA’s just the start. HIPAA just gives us guidance on what kind of data suggests the ability to re-identify the individual.

Dr. Suraj Kapa (00:21:36):

The fact is, beyond HIPAA, there are laws that actually govern how data can be moved between regions, between nation-states. GDPR, which we’ve all probably heard about on this call, actually governs how information can go from Europe to other countries based on those countries’ privacy regulations, and it can result in extraordinary fines. Beyond that, there are over a hundred data residency and digital sovereignty laws that have evolved over the course of less than the last decade that actually state that data cannot necessarily flow beyond the boundaries of a nation, irrespective of whatever that other nation’s privacy laws, privacy-enabled regulations, or promises are.

Dr. Suraj Kapa (00:22:25):

So we have to think about the fact that with each law and regulation imposing a different set of requirements, data sharing, especially across borders, becomes an obstacle course. The fact is, if I as a US-based healthcare institution want to work with my satellite clinic in Dubai, I’m going to run headlong into how I enable the interaction of my data with their data so that I can actually offer the best possible care that is holistic and understands all that the individual patient has gone through between both regions. And as you can imagine, this adds extraordinary time and cost and can actually create huge limitations in terms of the insights we derive from that data. This is a map of how laws have evolved since around 2017 across the globe, and the increasing strictness of laws that are happening globally, which really limit what kind of data we can share and what has to be taken out of that data before it can be interacted upon.

Dr. Suraj Kapa (00:23:35):

And what regulations exist to limit the flow of data between regions, when we talk about, again, things like data residency and digital sovereignty. When you think specifically about HIPAA, and I always go back to HIPAA because it really is a codification of what it means to extract identifying information from a data set, namely to extract PHI or PII, there are really 18 identifiers that HIPAA speaks to. And to be quite honest, removing the first 17, while annoying, is possible. It’s possible to take out names. It’s possible to take out phone numbers, social security numbers. There are natural language processing approaches to actually enable extraction of that information, even from free-text data.

Dr. Suraj Kapa (00:24:28):

But the deeper you get in, the further down the list you get, the harder it becomes, because then you start getting to aspects like zip codes, where people live. But isn’t it important to understand geographic aspects when you think about how to enable the best healthcare possible? Because geography matters. Because it has to do with environment, climate, and other issues that can impact individual health. We know certain diseases can be endemic to certain regions of the world. Then, when we start getting into things like voiceprints and photographic images, it becomes even more complicated, because there’s a growing set of literature that says you can actually extract detailed information about somebody’s health from something as simple as their voice. Several startups have shown that from the voice alone, we can predict the likelihood of you having coronary artery disease and thus the potential need to get a stress test. Wouldn’t you want to enable that globally and scalably rather than only having it exist in certain institutions? Especially since it’s a “simple algorithm” that can be deployed through the cloud or through other intermediary means.
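As a toy illustration of the rule-based side of this de-identification work, here is a hypothetical Python sketch that scrubs a few of the rigidly formatted HIPAA identifiers from free text. The patterns and labels are invented for illustration; production pipelines rely on trained NLP models and must cover all 18 identifier categories:

```python
import re

# Hypothetical patterns for a few rigidly formatted HIPAA identifiers.
# Real de-identification pipelines use trained NLP models; simple
# regexes only catch fixed formats like phone numbers and SSNs.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ZIP": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt. called from 555-867-5309, SSN 123-45-6789, lives in ZIP 90210."
print(redact(note))
# → Pt. called from [PHONE], SSN [SSN], lives in ZIP [ZIP].
```

Notice what this sketch cannot do: names, dates, and free-form descriptions have no fixed shape, which is exactly why the "other characteristics" identifier discussed next is so hard.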

Dr. Suraj Kapa (00:25:44):

But where HIPAA gets really tricky is number 18: other characteristics that could uniquely identify an individual. The reason that in particular is tricky is that even if I take out all this other information, the fact of the matter is that if I have enough information, I probably can track it back to that individual. I can probably understand on some level, and figure out on some level, who that individual might be, and that’s where it becomes tricky. Because when you talk about genetic data in particular, we know that genetic data is a fingerprint for an individual. As there are more and more databases that exist to actually store genetic data and to enable genetic data, we know that, simply put, while the genetic code on its own is not necessarily going to identify the individual, paired against other data sets that might be liable to data breach it might allow identification of the individual, and also insights into what diseases they might be at risk for.

Dr. Suraj Kapa (00:26:51):

And that could potentially be used to harm that individual, or to try to bribe that individual, or frankly to affect that individual’s insurability or employability. So the truth of the matter is, when we get to number 18, while we can potentially extract, mask, or tokenize numbers one through 17, once you send all of the other data that’s not a specific HIPAA identifier, you make that data set, if it’s breached or even if it’s accessed in a usual way, potentially a means to still identify that individual. And that’s why number 18 exists within the HIPAA codification. So after we get through all of this, we’ve done the best we can to extract as much information as we can to satisfy compliance and legal needs. We need to prepare the data. We need to prep it in a way that allows for that de-identification, for that anonymization.

Dr. Suraj Kapa (00:27:56):

And when we go through all the different legal frameworks that exist globally, that concept of anonymization versus de-identification becomes important, because there are some laws that state that the only data that can be moved is anonymized data. But even then, what does that mean? Because frankly, to anonymize data means it can never be tracked back to the individual, and how are we going to achieve that, like I just explained in the context of that 18th identifier? After that, you need to encrypt and transmit that data. Now, historically, when we think about cryptography, when we think about encryption, there’s almost always a decryption key. There’s a way to reconstruct the data, to make it operational, and that ability to make it operational means that if an illicit party gains access to both the encrypted data set and that key, the data can be used illicitly.

Dr. Suraj Kapa (00:28:54):

And not only that, but massive data sets are difficult to transmit in this regard. If you think about a genomic data set, just for a single individual that can be hundreds of gigabytes. And you still have a risk of liability with the existence of that encryption key. So can we get around this? Because we always have to ask ourselves: how can I interact with data if I can’t see the data? And that’s where novel approaches can get us past that. At the end of it all, we need to decrypt and use the data. So the data user obtains the data, gets access to the data in some way. Now what we’re doing as a data owner is relying on trust, in part based on our legal agreements and other factors, that the counterparty will adhere to the reasons for which they obtained the data.

Dr. Suraj Kapa (00:29:51):

But you’re not only relying on that. You’re also relying on their data security, that somehow their data security is equivalent to yours so there might not be a data breach. And there’s really no way to monitor how that data’s being used, or to counter trusted-but-curious parties that might exist within the organization, or additional parties that have access to that organization’s digital infrastructure. The fact is, this entire process flies in the face of the coinage that data is the new oil. Because the fact is, data, as we’re saying here, can be copied and then leveraged. Oil can’t be copied. Once you use a barrel of oil, that oil is gone.

Dr. Suraj Kapa (00:30:41):

The fact is, when you use data for a specific use case, leveraging it toward that use case, ideally you don’t want to replicate its presence over and over and over again. Using that electrocardiogram example, if I have agreements with 10 different algorithm providers, and I have to replicate my electrocardiogram dataset 10 times over, there are now 11 data sets of the electrocardiograms that were generated by my institution and annotated by my institution. Then we hit a slippery slope of exactly how that data is going to be used and how we can control its usage. How can we avoid that data maybe being transmitted to yet a third party, by accident or on purpose, which we can’t necessarily track? There is a wide variety of approaches, called privacy-enhancing computation techniques or privacy-enhancing technologies, that have been looked at over the course of the last several years to help start enabling data interactions while abiding by data regulations such as HIPAA and others.

Dr. Suraj Kapa (00:31:52):

These include things like tokenization, which masks the sensitive data. But unfortunately, when it masks the sensitive data, it takes it out of use. It essentially eliminates or limits the possibility of that data being operationalized upon. Let’s use the genomic data set as an example: if you have to tokenize half of your genome, that half of the genome is no longer usable, so you can’t do whole-genome evaluations. Synthetic data is another approach, where people have been using quantitative statistical approaches off of real data to generate synthetic data that’s representative of how people should look, based on how they statistically will look. So you approximate real humans, but when you only approximate real humans, your algorithms only approximate real life, and the truth of the matter is this creates errors within the algorithms, or limits the accuracy that would be achievable if you had access to true raw data. Then we talk about differential privacy, where noise is added to the data set to make it difficult to tell whether information on a specific person is included in that data set.
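To make the tokenization trade-off concrete, here is a minimal, hypothetical Python sketch (the field names and salt are invented): the token still supports exact matching across records, but the underlying value is gone from analytic use.

```python
import hashlib

def tokenize(value: str, salt: str = "demo-salt") -> str:
    """Replace a sensitive value with an irreversible token.
    The same input always yields the same token, so records can
    still be joined, but the original value cannot be recovered."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

record = {"patient_name": "Jane Doe", "systolic_bp": 141}
masked = {**record, "patient_name": tokenize(record["patient_name"])}

# Joins and equality checks still work on the token...
assert masked["patient_name"] == tokenize("Jane Doe")
# ...but any analysis that needed the raw value is now impossible,
# which is exactly the "takes it out of use" limitation.
```

In practice the salt would be a secret held by the data owner; without it, reversing the token by brute force becomes much harder.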

Dr. Suraj Kapa (00:33:09):

So if I want to know, “am I in this data set?”, that noise prevents my finding out. But differential privacy has its own limitations in the same regard, in terms of the impact on accuracy and in terms of the impact on how that data set can be used against other shared data sets. Homomorphic encryption is something that we’ve probably all heard about; anybody who’s interested in privacy-enhancing computation is probably aware of homomorphic encryption. But the problem with homomorphic encryption is that what the cryptography enables is limited, because of the limited primitives on which it’s based, and because of that it’s computationally inefficient at scale. It’s great if you have just a very small data set or a very small genome that you’re working with, but once you start talking about hundreds, thousands, millions of data points, it falls apart in terms of its operational efficiency. Secure enclaves and confidential computing are other ways that people have looked at enabling interaction upon data.
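For intuition on the accuracy trade-off with differential privacy, here is a small hypothetical Python sketch of the Laplace mechanism (the dataset and parameter values are invented): the released mean stays useful in aggregate, but noise calibrated to one record's maximum influence hides any individual's presence.

```python
import random

def dp_mean(values, epsilon=1.0, lo=0.0, hi=100.0):
    """Differentially private mean via the Laplace mechanism.
    Values are clipped to [lo, hi]; adding or removing one record
    can then shift the mean by at most (hi - lo) / n, and noise is
    scaled to that sensitivity divided by the privacy budget."""
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (hi - lo) / len(clipped)
    # The difference of two Exp(1) draws is a Laplace(0, 1) sample.
    noise = random.expovariate(1.0) - random.expovariate(1.0)
    return true_mean + noise * sensitivity / epsilon

ages = [34, 51, 62, 45, 70, 58, 49, 66]  # true mean: 54.375
print(round(dp_mean(ages), 1))  # a noisy answer near the true mean
```

Shrinking `epsilon` strengthens the privacy guarantee but widens the noise, which is precisely the accuracy impact mentioned above.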

Dr. Suraj Kapa (00:34:13):

The problem with both is that hardware dependencies exist. These trusted execution environments, number one, depend on the aggregation of data, which limits the ability of the data owner to retain control over how their data’s used; and in the case of confidential computing there are also extraordinary hardware requirements that increase cost. Then we talk about federated learning, and federated learning says, “Okay, we’ll build models at each individual location and then build a parent model.” But the problem is federated learning has been shown to limit accuracy and to impose increased computational needs on the individual data owners. They have to have their own equipment, their own specialized approaches, their own knowledge base of how to operationalize it. And it’s known that when you build a neural network over a federated network, you can often reconstruct portions of the training data. But data ecosystems have the potential to become easier.
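The federated idea itself is simple to sketch. Below is a hypothetical Python toy (the data and model are invented): each "hospital" runs a gradient step on its own private data for a one-parameter linear model, and only the resulting weights, never the records, are averaged by a coordinator.

```python
def local_update(w, data, lr=0.1):
    """One gradient-descent step on a site's private (x, y) pairs
    for the toy model y = w * x. Only w leaves the site."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(site_weights):
    """Coordinator step: plain average of the sites' weights."""
    return sum(site_weights) / len(site_weights)

# Hypothetical private datasets at three hospitals, roughly y = 2x.
sites = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0)],
    [(3.0, 6.2)],
]

w = 0.0
for _ in range(50):
    w = federated_average([local_update(w, d) for d in sites])
print(round(w, 2))  # → 2.04, close to the underlying slope of 2
```

A real deployment would use a federated-learning framework with secure aggregation; this only illustrates that raw records never leave the sites, while, as noted above, the shared model can still leak information about the training data.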

Dr. Suraj Kapa (00:35:14):

It’s not all lost. Ultimately, simplifying the agreements and managing what can and can’t be done through set permissions, in a set-up-once-and-enable approach, which is a software-based, non-hardware-dependent solution that is agnostic to whatever cloud or on-prem interface you’re using, will ultimately be the way we actually enable scalable digital health. In other words, reaching the promise we were talking about at the very beginning of this conversation. And ultimately, many of the features we’re going to want are minimizing the IT costs, limiting the amount of extra hardware people need to get, and having an audit trail.

Dr. Suraj Kapa (00:35:55):

So people know how their data’s used, and digital rights associated with that, about, okay, my data will only be used for this particular event, for this particular use. Enabling de-identification automatically, and one-way encryption, in other words, encryption without a decryption key that nevertheless allows that data to be operationalized. So when that data exists in an encrypted way, in the ether, in any particular virtual climate, there’s no way to reconstruct it, to reconstruct not just the HIPAA identifiers but frankly all of the data, or any of the data. So you can’t even get to number 18, because you can only get out of that information what the digital rights said you could, what the query response will be, what the mean age of people developing stage four cancer actually is. Ultimately making it real-time, taking out the months-long process of de-identification that has to happen.
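The “digital rights” idea, where only pre-approved outputs such as a mean age ever leave the owner’s side, can be pictured as an allow-list gate in front of the data. This is an illustrative sketch with hypothetical records and query names, not a description of TripleBlind’s actual mechanism:

```python
# The data owner pre-approves specific aggregate queries; only those
# outputs ever cross the firewall. Raw rows and identifiers never leave.

APPROVED_QUERIES = {"mean_age_stage4"}    # set once by the data owner

patients = [                              # hypothetical records, kept on-prem
    {"age": 61, "stage": 4},
    {"age": 55, "stage": 2},
    {"age": 70, "stage": 4},
]

def run_query(name: str) -> float:
    if name not in APPROVED_QUERIES:
        raise PermissionError(f"query {name!r} not permitted by data owner")
    if name == "mean_age_stage4":
        stage4 = [p["age"] for p in patients if p["stage"] == 4]
        return sum(stage4) / len(stage4)  # only the aggregate leaves

mean_age = run_query("mean_age_stage4")   # allowed by the owner's policy
# run_query("count_millionaires") would raise PermissionError
```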

Dr. Suraj Kapa (00:36:59):

When we do that, we’ll be able to enable data in an ecosystem much more efficiently to reach multiple needs: the fact that we want to collect, harmonize, and curate data across multiple different data owners. And that’s extraordinarily important when we talk about limiting bias. Because the fact is, if I sit as a person accessing one data set in one location in one hospital, the truth of the matter is all of the insights I get might be highly applicable to the population represented in that hospital in which I sit. But I cannot be so bold and so proud as to imagine that the insights I obtain will necessarily be applicable to an extraordinarily different population in a different area of the world with different socioeconomic, different environmental, and frankly, different endemic health issues. And discovering new AI approaches, not just deploying what we already have, but more scalably allowing the opportunity for these digital enablers to understand trends. To understand, frankly... we’re just slowly, hopefully, getting out of a pandemic.

Dr. Suraj Kapa (00:38:14):

Imagine if our ability to rapidly understand in real-time were more efficient. We wouldn’t be reactive; we could be proactive. But in the midst of all this, we need to be able to deliver effectively and scalably. As I said earlier, nearly every single one of us probably has a smartphone right now, sitting on our desk, sitting in our pocket, or sitting in our jacket. The truth of the matter is these smartphones, and these computers that we have on our desktops and in our laps, have the computing capability of supercomputers from just about 20 years ago. They can enable all of these algorithms to be enacted on data from cloud-based environments. But how do we do that while retaining the IP of the algorithms and retaining the privacy and safety of the data where it sits? Then finally, there’s validation of these algorithms, which needs to be done in order to ensure that they’re acting in the way we need them to act.

Dr. Suraj Kapa (00:39:11):

Ultimately, there are six key considerations for privacy technology. We need to ensure speed and accuracy so that we can have high interoperability. We need to make sure there’s real-time de-identified computation. We can’t have somebody sitting there once a month waiting to get the data de-identified, and then eventually, after that manual process of verifying the de-identification, shipping the data on for post-evaluation. Because then you’re not getting real-time understanding of the nuances of the data. You’re getting it after weeks or months of a process that ensures the de-identification. If you have an approach of one-way encryption that nevertheless allows the data to be enabled, then you’re allowing the ability to compute upon it without risking any identifiers leaking out. You have to limit issues of compatibility. We have to work across each other’s clouds. It needs to be compliant with all existing data regulations while limiting data movement, because these really overlap, especially in the current world of data regulations.

Dr. Suraj Kapa (00:40:14):

And ultimately, it needs to be hardware agnostic. We can’t expect every single healthcare center, every single data owner, to buy thousands to hundreds of thousands of dollars’ worth of specialized hardware. Because of that initial point I put out there about global health: the fact that the vast majority of humans never see a doctor between the time they’re born and the time they die limits the opportunities we have with scalable digital health if we limit digital health to the people who can afford the hardware. And this is where the TripleBlind solution comes in. We’ll talk about this more around the Q and A, but the TripleBlind solution is an API-driven virtual exchange. It removes the risk, effort, and cost while not restricting the utility, because it exists behind the firewalls of both organizations, the data owner and the data user.

Dr. Suraj Kapa (00:41:06):

It actually encrypts both. It breaks down the logic circuits and one-way encrypts the data so that they can interact with one another in an encrypted way that can never be reconstructed, with only the query output that was declared okay by the data owner resulting from that interaction. So as I sit there as the data user and say, well, I want to run an algorithm that predicts what my risk of dying in the next week is, the data owner will only allow for that operation to occur. But the next time that data user says, “Huh, I just accessed their data. Let me just sneak this little one in to figure out how many millionaires are in their data set by pairing it with this other set of data,” that’s not going to happen. We can control how, why, and when the data is being operationalized.

Dr. Suraj Kapa (00:42:02):

But as a data user, I can also control how, why, and when my algorithm is being deployed while protecting the IP behind it. There’s a wide variety of ways that we enable this, which we can go into in more detail. But it’s a toolset that incorporates a lot of the well-known concepts we’ve talked about earlier, but also several proprietary approaches within cryptography and other areas that actually scale the ability to create a one-way encryption approach and nevertheless enable that encrypted data to produce a response that’s similarly accurate to what we normally encounter in everyday data analytics. But to kind of end my portion of this talk before we get into Q and A, my key takeaways are that with the digitization of medical data, ultimately our goal is to enable scalable healthcare, but it can’t come at the expense of privacy.

Dr. Suraj Kapa (00:43:07):

The truth of the matter is that we still need to respect the right of the individual not only to be forgotten but also to know that nobody else is getting insights about them as an individual. I think all of us will say that we want to understand how to add 20 years to our lives from where we are right now, or to make ourselves healthier so we’re still running when we’re 90. But the truth of the matter is we want that at scale, we want that at a global level of understanding. We don’t necessarily want that to say, “Oh, Suraj is this…,” to the world, right? The next thing is that multiple novel technologies are being introduced to the market to address data privacy concerns in healthcare. But when we think about what type of technology we want to use and deploy, we need to think about not just the cool buzzwords that are in the newspaper right now or the next cool paper that showed up in Nature Machine Intelligence.

Dr. Suraj Kapa (00:44:13):

We need to think about the problem first, rather than the solution first. What is the problem we’re trying to solve? And the problem we’re trying to solve is to say, “Look, data needs to stay resident, yet operationalized.” We don’t want to copy our data, we don’t want to replicate our data, and we don’t want to risk our data. That’s when we start thinking about how we achieve that and what the solution to that problem is. Ultimately, future digital healthcare data networks ideally should be non-hardware dependent and should facilitate secure and trustworthy multi-party interactions, so that they can also enable data beyond everything I’ve talked about earlier. Everything I’ve talked about has been one data owner, one data user, but we’re realizing the importance of bringing data together across multiple modalities. One area that I haven’t talked about is the importance of social media.

Dr. Suraj Kapa (00:45:09):

It might be a surprise to many people, but frankly, at the level of pharma companies, many use social media for pharmacovigilance, to understand when there are new adverse events for drugs they have on the market. If you aligned that information even further with health information in a privacy-preserving way, don’t we think we could get even better insights into the adverse events of drugs and the side effects that might be associated with them? Ultimately, again, we need to ensure both individual and institutional privacy. We need to abide by varying regulations across nation-states, because they’re there for a reason, and we need to do it in a scalable, timely way in order to solve the problems that we all talk about when we talk about digital health. But I’ll end here and hand it over to Jay.

Jay (00:45:59):

Hi everyone. Thank you, and Dr. Kapa, thank you for that presentation. I think what you’re hearing from Suraj is that we’ve all learned to work within silos, right? Governance and compliance is incredibly important for the safety and the privacy of our patients and our clients globally. And while we’ve figured out our workarounds, it’s really, to me, stunted our growth in the services that we can offer. What you’ve heard is a little bit of what we would consider the greater good. If you look at data and you look at the access to data, how do we allow the potential for the possibilities? What I mean by that is, if we can keep compliant with all regulations as they grow every single day globally, if we can work within that framework, what does it mean to us to be able to have full access, not only to our data, but to truly collaborate with people that we would not have thought to collaborate with before?

Jay (00:47:12):

We talked a little bit about the collaboration piece here, but the key piece here is that there is data that carries weight, that is just as important to both the provider and the receiver globally. What that means is that we can now have access to data sets in smaller portions of the world that we have not been able to include in some of our research, some of our trials, and at the same time give that more disenfranchised country, or smaller community in another country, access to data, safely and securely, that they would never have. And what would that mean to the greater good of the health of their patients? Breaking out of that mentality of working in the silos, I think, is one of the biggest challenges, right? What we’ve told you today was not possible a few years back. I know some of the people on the call have a lot more history with TripleBlind. For some of the other people that might be new to TripleBlind, you’re scratching your head going, well, how can you do this?

Jay (00:48:21):

We look forward to sharing a deeper dive discussion down the road. But what I would ask you to take away from this is what it would mean if we took down the silos; they’re all important. Suraj talked about different techniques that we use now, whether it be homomorphic encryption or synthetic data. They’re good techniques for certain things, but when we need to expand, when we need to start bringing in multiple databases and multiple data sets to give better results, they just break down. So again, thank you for everyone’s time. I’m going to get to the Q and A. First off, let me start off. We have a question from Bob Wald: Dr. Kapa, the future of medicine is pushing the limits of remote connectivity between patients and medical providers. Besides high speed and low latency, what areas of highly secure, HIPAA-compliant technology are required for cable and other telecom providers into the home? Think of sensors, recorded access, VR/AR collaborative communications. Thanks.

Dr. Suraj Kapa (00:49:27):

No, I appreciate that. And actually, this particular area has been a passion of mine because, in my previous life in medicine as a full-time practicing clinician, a lot of my focus was on VR/AR and 5G innovation, and how we can utilize such technologies to better enable interaction at the point of care and at the point of home care. So I think this is a major evolving area of focus, when we start talking about edge computing and about how we enable data interactions, not just at the level of these massive brick-and-mortar data owners and data users, but actually going well beyond that when we think about home-based sensors. There’s a sensor for almost every aspect of human physiology.

Dr. Suraj Kapa (00:50:14):

Again, VR/AR, with the increasing use of VR/AR to actually extract biometric information about individual users. In fact, when you look at groups like Magic Leap, HTC, Oculus, they have their own healthcare departments within these things that are traditionally thought of as gaming implements. So the concept of how we enable HIPAA compliance at that level is a rapidly evolving one. At the industry level, they’re taking one of two or three tacks. One is just saying, “Well, this is just a consumer-level user.” As long as they give me the permission and sign that 500-page document, the one you scroll through to get to the end of to hit agree and continue just so you can use the technology, you basically sign over the rights to use it as is. Or they say, “Look, we’re just not going to store anything that actually comes out of the use of this.”

Dr. Suraj Kapa (00:51:16):

But then it limits what the use of it is in the future, because you’re not actually gaining the insights you need to be able to create new technologies or new understandings of what’s happening with the individual; you’re creating limitations on its opportunity. Or, number three, people just take some middle ground where they try to be semi-HIPAA-compliant but nevertheless enable the results of the use of the technology, which is not ideal either, for all the reasons we talked about. So when we think about highly secure, HIPAA-compliant technologies, we’re going to slowly start thinking more and more about edge computing. More and more of these data are being stored in cloud environments, but we have to think about cloud interoperability, because in the context of confidential compute, confidential compute limits the ability of, say, an AWS to interact with an Azure cloud. Thus, you’re creating fiefdoms of how many people are on this versus that, which isn’t ideal when we start thinking about digital health opportunities. Jay, I don’t know if you want to comment further on that question.

Jay (00:52:25):

Yeah, I think it’s a great question. I think, as we talked about, the edge piece is the next step, and it’s obviously something that we’re looking at as we talk about some of the wearable devices, as we talk about multiple data sets. If I could take the data from your wearable device, if I take somebody with diabetes and I can then pull in their health records from their hospital, their records from their local doctor, their pharmacy purchases, and then I look at maybe their credit card statements. This is where I talk about multiple data sets, right? Being able to still keep complete privacy, but pulling in multiple data sets to get a better picture of really, what is the patient’s lifestyle? How can we actually give them health assurance and keep them from getting sicker, but supply them with data beforehand by taking these multiple data sets? So, great question. I’m going to go just to the next one, which is: what does the future platform look like? What do you think the actual platform looks like, Dr. Kapa, on the healthcare side?

Dr. Suraj Kapa (00:53:30):

Yeah. So when I think about the future platforms, what I would imagine is something, again, that’s not hardware dependent, that enables digital rights at the user level and the owner level, that’s software-based. Not necessarily software as a service, but purely software-based, in terms of not being dependent on specific hardware requirements, agnostic to where the data exists as long as there’s some connection to the internet, and that nevertheless disallows the ability of that data to be reconstructed. I think the future platform needs to basically go to where things are going to be, which is that data’s going to be cloud-based. Algorithms and analytics are going to be enabled through the cloud, but there’s going to be increasing competition among different cloud-based providers. And there needs to be a solution that allows for the interoperability between them.

Jay (00:54:35):

Do you see, in this platform… now, if large providers feel comfortable with giving secure access to their data, that we should be able to expand our capabilities? Where people who write their algorithms, but wouldn’t normally have access to this data, could bring us to cures and additional answers sooner than working just within a big pharma or within a big university?

Dr. Suraj Kapa (00:55:08):

So we’re definitely seeing that. I mean, just dovetailing off your question, the entire concept of what real-world data can bring to bear on an institution’s understanding of a patient’s health, of a patient’s status, or frankly, even what the next best drug should be, or even going beyond that, who that drug will actually benefit, is so dependent on understanding real-world data. Right now, the way it works, or traditionally the way it’s worked, is a bunch of executives will sit in a room together and say, “Oh, this is an interesting disease to target. Oh, there’s a bunch of literature saying this particular molecule’s responsible in this disease. Oh, hey, this particular molecule will interfere with that molecule. Let’s go after this as a drug.” But maybe within that disease process, within that phenotype, that particular aspect is only 20% of the overall phenotype.

Dr. Suraj Kapa (00:56:08):

So that particular drug would vastly benefit 20% of the population of people affected by that disease. But when you run a clinical trial that’s just widely applied to the entire phenotype, the 80% whom it doesn’t help make the trial negative, and the drug fails. Thus, that concept of precision medicine, and that concept of better understanding the nuances of the disease population, is so critical when we start thinking about how we can aggregate external data, real-world data information from healthcare organizations, to get better insights about what patients actually are in the real world, to better understand how we can create the best technologies, the best treatments, et cetera, and who they actually benefit. So we can better inform that both at the research and development level, as well as at the deployment level.

Jay (00:57:08):

Great. That’s fantastic. One last question, being cognizant of time. Can TripleBlind create new contextual metadata for digital assets, locked inside the asset, for validation and comparison to other metadata creations, and mining of this new data to find the right assets? This is adding a new layer for each metadata addition to the DICOM image.

Dr. Suraj Kapa (00:57:35):

Yeah. So I’m going to try to answer that question based on what I think is being asked by Bob. When we think about how TripleBlind approaches the data, first off, you need to consider the fact that the data user does have to have some understanding of what the data owner has. That was one of the first things I spoke about, that we need to create libraries of what data is actually there. This is actually the one place where TripleBlind enables synthetic data, so that the data user can get some general understanding of what the columns are labeled as, or of the fact that this electrocardiographic algorithm I’m going to deploy is actually being deployed on ECGs. Because maybe their library was actually pointing at chest x-rays, and then maybe I’m forcing my algorithm to spit out an output, but it’s irrelevant to the actual data on which it’s operating.

Dr. Suraj Kapa (00:58:31):

So that’s always important. Now, when we start talking about using TripleBlind to create new contextual metadata for our digital assets, as long as that metadata exists behind the firewall, it can always be tagged onto, aligned to, or kept adjacent to the raw data, so that the incremental metadata can be evaluated in a privacy-preserving way in collaboration with that primary data. So there’s a way to actually mine these against each other. One example, which I don’t think is specifically applicable to what you were asking, but which is kind of a simplified consideration, is billing codes. I consider billing codes a form of metadata, even though so much of healthcare analytics is done on billing codes. Now imagine if the metadata that exists within the data owner’s organization were compared against the metadata that exists within the payer’s experience, in terms of what’s actually paid for.

Dr. Suraj Kapa (00:59:40):

And we start getting better analytics in that paired interface without having to send the data to one another. And if I understood the question right, this metadata can exist in a “lockbox” behind the firewall of the data owner that will nevertheless allow it to be enabled. Or, in another example, if the data owner approves creating metadata about the data owner’s assets that the data user can then use, and that’s agreed upon in a BAA or any sort of agreement or approval process, that can also be enabled. But ultimately, that metadata will sit behind the firewall of whoever retrieves the metadata information that’s gleaned from that interaction. Jay, I don’t know if you have any other comments on that.

Jay (01:00:27):

No, I think that’s it. I want to be cognizant of time. Bob, really great question. Marshall, I see that you have a question. Bob, if there are any other details, please reach out; we’ll easily set up a session. And Marshall, we’ll get back to you. It’s a great question, I just want to be cognizant of time. We’ll get back to you on that, and thank you so much. Chris, do you want to wrap this up?

Chris Barnett (01:00:52):

I do. Thank you, everybody, for joining, and for the great questions and discussion, and panelists, thank you. We will send a video of this to everybody that’s on board, and also our contact information, so you can follow up with anything else you’d like to talk about. Thank you, everybody, and have a great Tuesday. Appreciate it.

 


Understanding the Personal Data Protection Act

People around the world are growing increasingly concerned about the collection and use of their personal, private information, and governments have responded by enacting various data protection laws. Read our two-part series on the Schrems II decision: part one, part two.

In Singapore, the Personal Data Protection Act of 2012 was created in response to excessive and intrusive marketing activities. The act applies to all companies that do business in Singapore and reflects the growing attention and regulation surrounding the need to enforce the digital privacy of individuals.

Under the PDPA, protected data is any information that could be used to identify an individual. This includes full names, passport numbers, photographs, videos, personal telephone numbers, personal email addresses, residential addresses, DNA profiles, and biometrics — such as the voice recording of an individual. It is important to note that the act does not include business contact information, such as business email addresses and business telephone numbers.

The PDPA applies to private businesses but not to government or public agencies, allowing those agencies to conduct essential legal matters and provide social services to individuals. The PDPA also does not apply to people and organizations handling protected information in a domestic or personal capacity. For instance, collecting names and telephone numbers for a youth softball team would not violate the PDPA.

The Singapore data law is designed to establish a baseline standard for safeguarding personal data within the country. It complements other regulations and laws that apply to specific sectors, such as the laws concerning privacy within the banking sector. Read about TripleBlind’s recent expansion to the Asia-Pacific market.

Responsibilities Under the Law

The intent of the PDPA is to prevent the misuse of personal information. The law also acts in the best interest of organizations, as it establishes a foundation of trust for business dealings. Thus, the law recognizes the importance of both individual privacy and the need for organizations to collect and use personal data for legitimate purposes.

Organizations that fall under the PDPA have nine responsibilities. They are:

  • Receiving Consent. Organizations can only collect and use data from individuals who have given their explicit consent. This requires developing policies and procedures that notify customers of data collection and request their consent. Organizations must also inform individuals of the ways their data could be used, and individuals must opt in for their data to be collected under the law.
  • Limiting Use. When an organization collects personal data, it may only use that data for purposes to which the individual has consented. Any additional use requires additional consent.
  • Notifying Individuals. In addition to notifying individuals when their data has been collected, organizations must also notify individuals in the event of a data breach.
  • Allowing for Access and Correction. Individuals may request a copy of their personal information and if errors are found, the organization is obliged to correct them.
  • Verifying Accuracy. If the collected data is going to be processed in a way that affects the individual or if the information will be disclosed to a third party, organizations must make reasonable efforts to make sure that the data is accurate.
  • Providing Security. Organizations must also make reasonable efforts to safeguard their collected data and protect it from unauthorized access, manipulation, theft, and use. Security should include protection from both internal and external threats.
  • Limiting Retention. Organizations are only allowed to retain data for as long as needed to meet explicit business purposes that were outlined at the time of collection.
  • Limiting Transfer. Before an organization transfers protected data outside of the country or stores data in the cloud, it must ensure that the destination meets PDPA requirements.
  • Providing Transparency. When an organization develops procedures and policies to protect PDPA information, it must make those measures publicly available on an official website.
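Several of these obligations, consent-bound use and time-limited retention in particular, lend themselves to mechanical checks. A hypothetical sketch of what such checks might look like (the record fields and dates are invented for illustration):

```python
from datetime import date, timedelta

# Hypothetical consent record: what was collected, why, and for how long.
record = {
    "purposes": {"appointment_reminders"},
    "collected": date(2022, 1, 10),
    "retention_days": 365,
}

def use_allowed(rec, purpose: str) -> bool:
    """Limiting Use: data may serve only purposes with explicit consent."""
    return purpose in rec["purposes"]

def retention_expired(rec, today: date) -> bool:
    """Limiting Retention: flag records that are due for deletion."""
    return today > rec["collected"] + timedelta(days=rec["retention_days"])

use_allowed(record, "appointment_reminders")   # consented purpose: allowed
use_allowed(record, "marketing")               # would need fresh consent
retention_expired(record, date(2023, 6, 1))    # past the 365-day window
```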

Potential Penalties

Companies that do not meet the above obligations run the risk of receiving harsh penalties. Penalties can be imposed after a routine inspection reveals non-compliance. They can also be imposed after regulators receive a whistleblower complaint that triggers an investigation.

If non-compliance is uncovered, authorities may impose a financial penalty equivalent to 10% of annual turnover or $1 million, whichever is greater. Authorities may also direct non-compliant businesses to halt activities related to data collection, disclosure, or use. On some occasions, organizations may be instructed to delete all data related to non-compliance.
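The financial exposure under the “whichever is greater” rule above is straightforward to compute; a quick sketch with invented turnover figures:

```python
def max_pdpa_penalty(annual_turnover: float) -> float:
    """Greater of 10% of annual turnover or a $1 million floor."""
    return max(0.10 * annual_turnover, 1_000_000)

max_pdpa_penalty(4_000_000)    # 10% would be $400k, so the $1M floor applies
max_pdpa_penalty(50_000_000)   # 10% of turnover: $5M
```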

These penalties are administered after the fact, and the leaking of personal data cannot be reversed. A breach could also result in the non-compliant organization being subjected to legal action.

Best Practices for Compliance

Organizations looking to meet PDPA requirements, and similar regulations for that matter, must create a strong privacy policy and make sure that policy is available for public consumption.

The policy should outline terms and conditions related to obtaining consent from customers and others from whom data will be collected. The policy should also outline ways in which people can access the data which has been collected, address any mistakes, withdraw their consent, and delete their data from the system. Finally, a policy should outline all the administrative, technical, and physical measures used to keep data secure. Measures should be put in place to ensure that data is automatically deleted once it is no longer in use.

How TripleBlind Can Help You Address Compliance Challenges

Whether your data partners are across town or across the globe, TripleBlind’s privacy-enhancing technology can help your company remain compliant with privacy regulations, including PDPA requirements. In fact, TripleBlind’s technology specifically addresses many of the law’s requirements:

  • We can help limit use. With our technology, organizations can restrict use of their sensitive data to the purposes for which they have received consent.
  • We can provide security. Our technology safeguards collected data from both internal and external threats, protecting it from unauthorized access, manipulation, and theft.
  • We can limit data transfer. Our technology helps ensure that any transfer of protected data outside of the country meets PDPA requirements.

Contact us today to see our next-generation technology in action.


How Synthetic Data is Used in Healthcare

Artificial intelligence and machine learning technologies are revolutionizing healthcare research, particularly in early-indication clinical trial reporting, remote delivery of diagnostics, and analysis of medical imaging data.

To produce groundbreaking insights, artificial intelligence models require massive amounts of unbiased, statistically significant data. In healthcare, this can mean using patient data, and the use of patient data raises privacy concerns. Regulations like the Health Insurance Portability and Accountability Act (HIPAA) prohibit the unauthorized use and disclosure of protected health information, which is any information that could be directly connected to a unique individual.

Information covered under HIPAA includes diagnostic imaging, genetic data, medical histories, Social Security numbers, and credit card or other financial information. For instance, HIPAA prohibits the release of a cancer diagnosis to an employer without the patient’s consent.

Thus, HIPAA and other data regulations make it difficult to process and utilize patient data, especially across organizational and national boundaries, even though the use of that data could lead to groundbreaking therapies.

One solution to this situation is the use of artificially produced data that is designed to avoid any connection to real-life people, termed synthetic data. Even though a synthetic dataset consists of “fake” data, it is built to resemble a real dataset so that it can be used for artificial intelligence and other applications.

In the healthcare industry, synthetic patient data can allow for sharing among healthcare providers, researchers, and private companies, such as technology companies creating AI technologies for use in the healthcare industry. But although this technology can help facilitate data collaboration in healthcare, it is not without its drawbacks.

 

Synthetic Healthcare Data is Common in the Medical Industry

One of the most common ways to create synthetic data is with neural networks. Real data is fed into a system of neural networks, which eventually produces a set of synthetic data that closely resembles the real dataset.

Importantly, the neural network system is designed to produce synthetic data that does not violate data privacy regulations. The system does this by avoiding the passage of any real-life patient data from the training data set to the synthetic data set.

Once the synthetic dataset has been created and determined to be fit for purpose, it can be used to train artificial intelligence models. Researchers can also share this synthetic dataset with significantly less concern about compliance violations.
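The generate-then-verify workflow described above can be sketched in a few lines. This is a minimal illustration only: the dataset columns and values are invented, and a plain multivariate Gaussian stands in for the neural-network generator a real synthesis system would use.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" patient dataset (columns: age, systolic BP, cholesterol);
# these columns and distributions are invented for illustration.
real = np.column_stack([
    rng.normal(55, 12, size=1000),
    rng.normal(125, 15, size=1000),
    rng.normal(200, 30, size=1000),
])

# Fit a simple generative model to the real data. A production system would
# train a neural network (e.g., a GAN or VAE); a multivariate Gaussian is
# used here as a minimal stand-in.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample a synthetic dataset: statistically similar to the real one,
# but no row is copied from any real patient record.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# A basic fitness check: aggregate statistics should roughly match.
print(real.mean(axis=0).round(1))
print(synthetic.mean(axis=0).round(1))
```

Each synthetic row is drawn from the fitted distribution rather than taken from the training set, which is the property that allows such a dataset to be shared with reduced compliance concern.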

One example of synthetic data in healthcare is a mobile app called M-sense. This app is designed to help migraine patients track their condition, gain a deeper understanding of it and reduce migraine symptoms. The app collects data from patients, and that data is used to create synthetic clinical data that migraine researchers can then use for their studies.

Clinical synthetic data has also been applied in research involving recently discovered or rare diseases. Because these diseases affect very few patients, data on them is relatively scarce. In these situations, synthetic health data can supplement real data collected by scientists, allowing researchers to create control groups for important clinical trials. This is similar to using synthetic data for machine learning, but the results are focused on specific rare diseases.

Another benefit of synthetic data in healthcare is that it is reproducible. Reproducibility is critical when conducting experiments as part of the typical scientific method. However, reproducing patient data can be difficult or impractical, particularly where patient privacy is involved. In these situations, it is beneficial to be able to produce additional datasets.

Official government agencies have also been using synthetic data. The Office of the National Coordinator for Health Information Technology (ONC) has an open-source project focused on creating superior synthetic data that can facilitate scientific research. The project is focused on producing high-quality synthetic data related to pediatrics, opioid addiction, and other complex healthcare situations.

 

Problems with Using Synthetic Data

Synthetic data does have limitations when used in the healthcare space.

First and foremost, it isn’t as useful as real data. The quality of clinical synthetic data depends heavily on the quality of the training data and the data synthesis system. A 2017 MIT study on the quality of synthetic data had two groups of data scientists conduct the same analysis: a control group using real data and an experimental group using synthetic data. The study team found that the experimental group matched the control group’s results with only 70 percent accuracy, which may not be acceptable in some situations.

Another problem with synthetic clinical data is the potential to omit outliers that would appear in a real dataset. Neural networks used to generate data are poor at producing unusual-but-possible data points, yet outliers are often more important than typical data points.

Conversely, while desirable for some use cases, passing outliers from a “real data” training set into a synthetic dataset can raise privacy concerns. If the training dataset of patient information contains outliers that the neural network system passes through into the synthetic data, these distinct data points could potentially be used to identify individual patients.

Additionally, the neural network systems that produce synthetic data must base their work on real private data, and these systems are vulnerable to cyberattacks. If a hacker gains access to the data production system, they may be able to reverse engineer private data. While some synthetic data systems use extremely restricted access to deter this kind of attack, it cannot be completely prevented.

 

TripleBlind’s privacy-enhancing solution addresses many of the shortcomings of synthetic data

  • Quality is maintained. Our solution allows for data to be kept in its original form. This means outliers are not lost in translation.
  • Better AI/ML modeling and better analysis. By leveraging superior privacy, data partners can alleviate compliance concerns, opening up access to more data than would otherwise be available.
  • Avoids unauthorized use. When a data holder uses a third party to generate synthetic data, it must turn over sensitive data to that third party and this opens the door to unauthorized use. With TripleBlind’s privacy solution, data holders never have to turn over their sensitive data.

If your company is currently considering the use of synthetic data, contact us today to find out how our next-generation approach to privacy technology compares.

Blind AI Tools

A11Y Product Award Hero Image

TripleBlind Recognized for Accessibility and Compliance at 2022 Product Awards

KANSAS CITY, MO, March 21, 2022 – TripleBlind announced today that it has been named the winner of A11Y & Comply: Level-up Scale & Complexity in the 2022 Product Awards. Hailed as the premier event for product managers, the Product Awards, presented by Products That Count in partnership with Mighty Capital and Capgemini, is the only awards show designed to celebrate the tools that help product managers build great products. In recognition of the dramatic digital transformation undertaken by product teams in 2021-2022, this year’s theme welcomes you into the Age of Product.

Nominees are chosen by Products That Count’s product manager network, and winners are chosen by an independent Awards Advisory Board composed of top product leaders. This year’s Board included Google Product Lead Neha Taleja, Transfix Product Lead Patrick Blute, Indeed.com Product Lead Iryna Krutenko, and product leaders Maheep Bhalla and Felipe Gasparino.

TripleBlind offers the most complete and scalable solution for privacy-enhancing computation. It unlocks the estimated 43 zettabytes of sensitive data stored by enterprises today that are inaccessible due to privacy concerns, enabling solutions for a broad range of use cases and more than two dozen mission-critical business problems.

TripleBlind’s novel software-only solution supports all cloud platforms and is delivered via a simple API. It’s built as a superior method to existing processes of privacy-enhancing technologies such as homomorphic encryption, synthetic data and tokenization. 

“TripleBlind has shown our community what it takes to be at the forefront of the revolution we call the Age of Product,” said SC Moatti, founder of Products That Count and the Product Awards. “This award is a testament to the innovation, focus, and transformation this team has made at the dawn of a new era in tech.”

“We are proud to have been recognized by the Product Awards for the accessibility and compliance that our solution unlocks for customers,” said Riddhiman Das, co-founder and CEO of TripleBlind. “Our solution allows intellectual property to be accessible and shared for collaboration while also ensuring that collaborations remain in compliance with agreed upon use and applicable data privacy regulations.”

 

About TripleBlind

Combining Data and Algorithms while Preserving Privacy and Enforcing Compliance

TripleBlind has created the most complete and scalable solution for privacy enhancing computation.

The TripleBlind solution is software-only and delivered via a simple API. It solves for a broad range of use cases, with current focus on healthcare and financial services. The company is backed by Accenture, General Catalyst and The Mayo Clinic.

TripleBlind’s innovations build on well-understood principles, such as federated learning and multi-party computation. Our innovations radically improve the practical use of privacy-preserving technologies by adding true scalability and faster processing, with support for all data and algorithm types. We support all cloud platforms and unlock the intellectual property value of data, while preserving privacy and enforcing compliance with all known data privacy and data residency standards, such as HIPAA and GDPR.

TripleBlind is superior to existing methods of privacy preserving technology, such as homomorphic encryption, synthetic data and tokenization and has documented use cases for more than two dozen mission critical business problems.

 

For an overview, a live demo, or a one-hour hands-on workshop, contact@tripleblind.ai.

 

About The Product Awards

The Product Awards, produced by Products That Count in partnership with Capgemini and Mighty Capital, celebrate the best products for product managers. Based on insights from thousands of product managers, the Product Awards showcase product managers’ favorite products within five distinct categories: Informed Go-to-Market Strategy, Delightful User Journey, Level Up Scale & Complexity, Responsive Product Accountability, and Empower the Whole Human. These categories were defined by our independent Awards Advisory Board, which is composed of five of the brightest product leaders around. Each category features four relevant superpowers. Learn more at productsthatcount.com/awards.

 

Contact

Madi Olivé
UPRAISE Marketing + Public Relations for TripleBlind
tripleblind@upraisepr.com
702.622.2542

Kansas City Business Journal

KC Company is on a Mission to Prove Commercializing Data Doesn’t Have to Cost Us Our Privacy

https://www.bizjournals.com/kansascity/news/2022/03/16/kc-company-commercializing-data-privacy.html

March 2022 Events Hero Image

TripleBlind Thought Leaders to Share Exclusive Insights at HIMSS22 and a Virtual Recap Webinar

KANSAS CITY, MO, March 10, 2022 – TripleBlind, creator of the most complete and scalable solution for privacy-enhancing computation, which unlocks the intellectual property value of data while preserving privacy and enforcing compliance with HIPAA and GDPR, will share industry insights and trends at two upcoming events.

TripleBlind will participate in the following events:

  • Unlock Private Healthcare Data, Tuesday, March 15 at 2 p.m. CT, Orange County Convention Center, W311E, Orlando, FL. 

TripleBlind’s co-founder and CEO Riddhiman Das and SVP of Healthcare Suraj Kapa, M.D., will present “Unlock Private Healthcare Data” during HIMSS22.

This session will cover the current barriers healthcare institutions face when it comes to unlocking data. Learn how healthcare institutions can collaborate and share data without compromising the privacy, speed, or integrity of the data. This will allow healthcare institutions to predict future diagnoses and reduce the complexities associated with internal and external data sharing, which often involves sensitive personally identifying information.

Click here to register.

  • “The Present and Future of Privacy in Healthcare,” Tuesday, March 29 at 11:00 a.m. CT, virtual.

Dr. Suraj Kapa, MD, TripleBlind’s SVP of Healthcare, will host a webinar recapping insights shared at HIMSS, along with bonus content for those who were unable to attend.

In this free webinar, the below thought leaders will discuss how healthcare institutions can collaborate around data without compromising privacy, speed, or fidelity of the data and how privacy-enhanced computation between organizations can facilitate rapid innovation in healthcare. They will also discuss current barriers that prevent healthcare institutions from unlocking data in safe and compliant ways.

  • Suraj Kapa, SVP, healthcare at TripleBlind 
  • Jay Smilyk, CRO at TripleBlind

Click here to register.

 


 

Contact

Victoria Guimarin
UPRAISE Marketing + Public Relations for TripleBlind
tripleblind@upraisepr.com
415.397.7600

TripleBlind now available in microsoft azure marketplace

TripleBlind is Now Available in the Microsoft Azure Marketplace

We are excited to announce that TripleBlind is now available in Microsoft Azure Marketplace, an online market for buying and selling cloud solutions certified to run on Azure!

Through the Azure Marketplace, IT teams now have access to TripleBlind’s solution and can avoid some common barriers that inhibit utilization of their data: 

 

  • Legal Agreements: IT teams can utilize their existing pass-through Microsoft legal agreement. Because legal teams have already approved this agreement, the need for supplemental reviews is eliminated.
  • Budget: Many enterprises already using Azure solutions have in place budgets for Marketplace, so no incremental budget requests are necessary to include TripleBlind’s solution.
  • Time: Although TripleBlind is already simple to install on its own, the Azure Marketplace enables enterprises to be up and running with TripleBlind in a matter of hours rather than weeks or months.

 

TripleBlind’s inclusion in the Azure Marketplace will accelerate collaborative data sharing among organizations around the world and will directly benefit industries that rely heavily on sharing intellectual property, like financial services and healthcare.

Screenshot of TripleBlind Azure Offering

TripleBlind’s software-only solution supports all cloud platforms and is delivered via a simple API. It’s built as a superior method compared to alternative privacy-enhancing technologies such as homomorphic encryption, synthetic data and tokenization. TripleBlind solves a broad range of use cases and unlocks the estimated 43 zettabytes of sensitive data stored by enterprises today that have been historically inaccessible due to privacy concerns.

 

To learn more about TripleBlind’s novel private data sharing solution or about how to get started with TripleBlind through Azure Marketplace, reach out to contact@tripleblind.com

HIMSS 2022, March 14-18, 2022, Orlando, Florida


Private data sharing between organizations can facilitate rapid innovation in healthcare, especially by enabling AI development using previously inaccessible data. During this presentation, TripleBlind CEO Riddhiman Das and SVP of Healthcare Suraj Kapa, MD, will discuss the current barriers healthcare institutions face when it comes to unlocking data. Learn how healthcare institutions can collaborate around data without compromising the privacy, speed, or integrity of the data, meaning data can be used while remaining compliant with privacy laws, rules, and regulations such as HIPAA. This will allow healthcare institutions to predict future diagnoses and reduce the complexities associated with internal and external data collaboration, which often involves sensitive personally identifying information.

CDO Magazine Logo

Walmart, Chief Data and Analytics Officer: Data is Driving Disruption in Retail’s Digital Transformation

https://www.cdomagazine.tech/cdo_magazine/topics/digital_transformation/walmart-chief-data-and-analytics-officer-data-is-driving-disruption-in-retails-digital-transformation/video_e055ce50-8189-5801-9e29-38595f030f18.html