Data Ethics & Policy with Sheila Colclasure

Data is infinite. Digital is inevitable.

Sheila Colclasure

This week we are talking about the efforts underway around the world to promote ethical, accountable data use, the promise and terror of AI, the need for a universal translator, and much more. Leading this conversation is Sheila Colclasure, Global Chief Data Ethics Officer and Public Policy Executive with LiveRamp.

Sheila leads efforts to ensure fair, ethical data use at one of the largest data aggregators in the world, enforcing data governance best practices and privacy protections. Her focus is on a people-first accountability model that treats data as an abstraction of individuals. She is a global thought leader and frequent speaker on applied data ethics and policy to prevent harms from data use.

Additional Links for Sheila Colclasure

LinkedIn

Twitter

Episode Transcript

Welcome to the Data Science Ethics Podcast. My name is Lexy and I’m your host. This podcast is free and independent thanks to member contributions. You can help by signing up to support us at datascienceethics.com. For just $5 per month, you’ll get access to the members-only podcast, Data Science Ethics in Pop Culture. At the $10 per month level, you will also be able to attend live chats and debates with Marie and me. Plus you’ll be helping us to deliver more and better content. Now on with the show.

Lexy: Welcome to the Data Science Ethics Podcast. This is Lexy Kassan.

Marie: and Marie Weber.

Lexy: and today we’re joined by a very special guest, Sheila Colclasure from LiveRamp. She is their Chief Public Policy Officer and previously their Chief Data Ethics Officer. She has decades of experience working in data privacy governance at Acxiom and helping to guide policy around the world for the use of data. Sheila, thank you so much for joining us and welcome.

Sheila: Thank you guys so much. I am thrilled to be on your podcast today talking about these very, very important issues. Thanks.

Lexy: Thank you. All right. So Sheila, give us a little bit of background. How did you start? What is it that you’re doing now? What is a chief public policy officer in this space?

Sheila: Well, I started over two decades ago coming out of Washington, D.C., first in the U.S. Senate and then managing congressional and political affairs for the illustrious and very sexy American Institute of Certified Public Accountants. They did very material, very important work for a sector and for the United States economy, so a great body of work came out of that. I came out of Washington, D.C. and joined Acxiom, which is a very data-intensive company. About 18% of their business was compiling data and then using data science to extract knowledge and make it actionable and meaningful for brands. So I joined them and began to build out their data governance program (back in the day, we called it a privacy program) and innovated over time. How do you use data for the benefit of people? How do you ensure that the data use does create beneficial impact for people, and that people, all of us included, have the ability to participate?

Sheila: It might be opt out, it might be opt in, it might be access, et cetera. So with the evolution of all that came a lot of pioneering work in the policy space: helping found different self-regulatory organizations, participating in what was then the Direct Marketing Association’s self-regulatory program, et cetera, not just domestically but in other markets around the world. And now at LiveRamp, I am building out a digital-first data governance program built on this notion of the ethical use of data. And that’s a key term, and very, very important: to be ethical. It suggests doing the right thing in all circumstances. And I do think that data ethics, or the ethical use of data, is the operative term for where we are in history, and that is accelerating into the digital age.

Marie: Thanks so much. You mentioned that data ethics is certainly at the forefront of your work and that you’re doing this around the globe. What are some of the differences that you see in data ethics around the globe, and especially in how it’s being treated from a policy perspective?

Sheila: Great question. What I’ve learned over time from firsthand experience, and of course we know this to be true: ethics is contextual. Data use is contextual. How, let’s say, citizens in the United Kingdom feel about data use is different from how citizens in Germany feel about it, different again from how citizens in China or Singapore or Australia, and certainly in the United States, feel. So ethics itself really is comprised, I would say, of three things. Number one, legality, and laws are a codification of things we’ve already decided are right or wrong. So that’s first. The second thing is this notion of: is it a just use of data? And that is essentially the harms test: have I detected and prevented harm? And then the third one is a fairness test.

Sheila: And fairness really goes, and this is why ethics is important: fairness is a proxy for ethics, and ethics is a proxy for fairness, as long as you unpack those other two things, the harm detection and prevention, and the legality, which is things we’ve already decided and codified. So this notion of ethics is highly contextualized and very, very important. Certain economies, and the citizens that participate in those economies, feel more favorable about data use, advanced algorithms, and data science delivering benefits. So in those economies, in those marketplaces, you can do a little bit more, or maybe a little bit different, with data than in other marketplaces that have a different cultural feeling, social norm, or approach to how data is used. So that gets at this: data ethics, data use in context, flexible by whatever the cultural values and social norms are. And we all have to remember that that is valid. It matters. And when we operate in that space, we have to be very sensitive and respectful to those cultural norms and social values and only use data and data science within those confines.

Lexy: As a global company, how do you approach that, given that you’re simultaneously dealing with the data and cultural norms of many different cultures?

Sheila: It takes a balance, and it takes a concerted effort. You have to be intentional, and we are. That is a big piece of the work that I do: engaging in these other marketplaces so that I can learn, so that I can be sensitive, so I can explain who we are and how we operate, and then have a mutual trust-based exchange and bring all that learning back inside so that my company can build its capabilities around that level of sensitivity and respect. But it does take intentionality.

Lexy: With that sort of intentionality, do you think that it’s likely that the public sector or the private sector will drive data ethics further or faster? Who has kind of that intention behind them at this point?

Sheila: Well, that is a complex question. Or let me say: the answer is complex. There’s a great deal of complexity. When you break down what we mean by the public sector, the private companies have to participate in that discussion too. They have to contribute, right? And it takes both pieces to have a healthy, vibrant economy, to be future-prepared, to think about job creation, to think about an accountable economy, an accountable tech sector, and accountable data use. You know, the backdrop of all of this is where we are at this place in time. We’ve come from an agrarian economy all over the world through the first industrial revolution, the second, the third, and now we’re in the fourth, which is also called the machine age or the digital era. And the fuel is data and data science.

Sheila: And this is how all of our tech will work. It has to. It creates data, it consumes data. It’s a virtuous cycle as we bring in more and more AI agents, maybe just machine learning complemented with some AI agents, small at first, and then they get progressively bigger as we train them and they get smarter. It brings into focus what you mentioned: the public policy debate in the public sector and in the private sector. And right now we have a bit of a disconnect. We have the public sector, and one of the components of the public sector is lawmakers, and, companion to that, regulators that write regs. So they are charged with creating regs that keep a society and economy under control and accountable. And then the private sector is charging ahead, innovating. And we’ve got to get this balance. One has to inform the other and has to maintain future viability. So both are very much in play. Both are very focused on data policy issues. Right now the epicenter of all of that is Washington, D.C. The backdrop of that in the United States is what’s going on in the 50 different states. The big macro view of all of that is the very important European General Data Protection Regulation, or GDPR, which has been in force for over a year now. And that’s very important as well.

Marie: And Sheila, we’ve actually talked about GDPR on the podcast before. And one of the things that I think about when I compare GDPR as it exists in the EU versus what we currently have in the United States, especially for a company like the one that you work at that’s global in nature, is this idea of developing towards the lowest common denominator in terms of the regulations. So basically implementing GDPR even here in the United States, where it’s not required: when you implement that across your organization, that then allows you to go into the EU market much more easily. So when you do think about the ethics and how it can differ from region to region, do you also still take into account where you want to set your minimums so you can have the broadest appeal?

Sheila: Yes. I love the way that you positioned that. Now, GDPR is very important; it is the standard right now. And of course Europe has something called adequacy, so other countries that want to trade with Europe are in the process of considering their own national data protection laws, or updating their national data protection laws, and certainly considering how their particular legal constructs around data line up with the General Data Protection Regulation. So it’s having this outsized effect on the world. It may not be the only standard, but it is a good starting place. In the U.S., we may decide to do something a little bit different. But to your point, Marie, for those companies that have never done operational data governance before, GDPR is a very thorough and thoughtful place to start. For companies that have done operational data governance before, GDPR was a really good exercise: to do an inventory and analysis and a tune-up, to bring it forward, to modernize, to really go down those 99 different articles in GDPR and understand how their organizations were or were not aligned.

Sheila: So to do that operationally, to ensure your systems comply system-wide, for a multinational company it is a very good place to start, to have one basis for operational data governance. And yes, it is a door opener. It gives you more certainty than you otherwise would have, and it’s a good approach. Now, it may not be the only approach. You know, in the United States we’ve come at data regulation differently. We have come at it sectorally. For more material uses of data, like credit decisions, we have the Fair Credit Reporting Act; for other types of data, like health care data, we have HIPAA and HITECH; and the list goes on. What we don’t have yet in the United States is a national data protection or data privacy law. Now, we do have the new law in California called the California Consumer Privacy Act, or CCPA, that many companies are getting ready for.

Sheila: And just like with GDPR: how do you engineer your programs all over the world? Do you just build to one standard, GDPR? It’s a really great place to start. CCPA is different. So then you have to come and look at your data processing here in the United States, and you have to make a decision as an enterprise. If you operate outside of California, do you build your processing in compliance with CCPA and do that everywhere in the United States, or do you have a means of segmenting off your processing of California resident data, so you give CCPA treatment there and you defer your treatment everywhere else? It is an efficiency concern and a legal certainty concern, and it’s one that all companies are grappling with. Right now we have seen, and we will continue to see, other states introduce state-level legislation that looks similar to CCPA. So we may well end up with a patchwork here in the United States in advance of being able to get a national standard at the federal level that preempts the state laws. That’s in play. We don’t know yet.
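To make that build-or-segment decision concrete, here is a minimal sketch of how an engineering team might route records to regime-specific controls by residency. Everything in it (the regime names, the control lists, the build-to-one-standard flag) is invented for illustration; it is not LiveRamp’s actual implementation or anything a regulator prescribes.

```python
# Hypothetical sketch: "build to one standard" vs. segmenting by residency.
from dataclasses import dataclass, field

@dataclass
class Record:
    user_id: str
    residency: str                      # e.g. "EU", "US-CA", "US-OTHER"
    controls: list = field(default_factory=list)

BUILD_TO_ONE_STANDARD = True            # the enterprise choice Sheila describes

def handling_regime(record: Record) -> str:
    """Choose which legal regime governs this record's processing."""
    if record.residency == "EU":
        return "GDPR"
    if record.residency == "US-CA" or BUILD_TO_ONE_STANDARD:
        return "CCPA"
    return "BASELINE"                   # sectoral US rules (FCRA, HIPAA, ...)

def apply_controls(record: Record) -> Record:
    """Attach the participation rights each regime requires."""
    regime = handling_regime(record)
    if regime == "GDPR":
        record.controls += ["lawful-basis", "access", "erasure", "portability"]
    elif regime == "CCPA":
        record.controls += ["notice", "opt-out-of-sale", "deletion"]
    else:
        record.controls += ["sectoral-rules-only"]
    return record

# With BUILD_TO_ONE_STANDARD = True, a non-California US record still
# receives CCPA treatment everywhere in the United States.
print(apply_controls(Record("u1", "US-OTHER")).controls)
```

Flipping the flag to False models the segmentation option: California residents get CCPA treatment, and everyone else falls back to the sectoral baseline.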

Marie: And I think that ties back nicely to the other question that we had about whether the private sector or the public sector pushes the advance of data science ethics. In one regard, as laws like GDPR in the EU and, like you said, this new regulation in California get passed, there becomes a greater expectation among the public for this type of protection, for this type of control over their own data. And as that expectation increases, it puts more pressure on organizations to either implement that on their own, or it puts more pressure on the lawmakers and the regulators to pass that type of framework so that governments provide that for their citizens. So I do think, as we move forward, we’ll kind of see a snowball of these two, where you’re going to see more expectation from the public, because they are now aware of it, and you’ll also see the private sector basically responding to that expectation.

Sheila: I agree. And the term I would like to use is this notion of accountable, or accountability. Companies that collect and use data, all types of data, need to be fully accountable for the data: the way it’s collected, the way it’s used, who it’s shared with, how they use it, and how individual people are able to participate. We see the term in GDPR, and we see transparency constructs captured in CCPA. The first thing is transparency: have you made your data practices meaningfully transparent for people, not just legalese, but done other things so that people can understand? Have you given them choices for marketing and advertising uses of data, uses that accrue to the benefit of people and that are the financial engine of free content and free access, which is really important for the Internet? You know, marketing and advertising is the financial engine; it’s the pay model. And I think the right choice there is opt-out with really effective transparency, and for sensitive types of data, opt-in. But the time has come, I think, for good brands to be accountable and really stand up operational data governance, not just trust-wash their brand, but actually ensure they’ve got operational data governance in place that delivers on those promises.

Lexy: So how do you balance the internal governance, I’ll say crackdown, but really the internal regulation of data governance, making sure that appropriate restrictions and permissions are in place for data access and data use, versus a more cultural shift within the organization: ensuring that people understand why they may or may not have access to information, and really enforcing the ethics that the organization has chosen to adopt?

Sheila: Data governance, especially in the digital age, becomes very tricky. What I think is a really great approach is as much automation as possible. You’ve heard the terms privacy by design, or privacy engineering, or data ethics by design, or applied data ethics. And I think, to your point, Lexy, it’s both. You’ve got to design it into your system, and you’ve got to have a team of experts that know what the issues are and can translate the policy concerns, the policy constructs. And there are layers and layers of laws in place, self-regulatory codes in place, case law, contract law. This is what I like to tell people: every piece of data that we touch already has some sort of regulatory requirement on it.

Sheila: We want to have not just experts that understand and can translate that and work with the engineering teams; it’s a continuum, and you’re on a journey to educate and to acculturate. And I think companies that acculturate around this notion of data stewardship do the best job. But you’ve got to start, and you’ve got to explain the value. I’m one of those people that believes other people are basically good and basically want to do the right thing. I think it’s hard, in the world of digital, to translate all of that, what is the right thing, into the way the tech works. And that is where the big, complex, nuanced lift is. That’s where all the action is. We can agree on what should be, but how do you take what should be and make sure that the tech works that way, and that you have all of the controls, not just administrative controls but technical controls, to ensure your tech does the right thing? It’s complex and it’s a journey, and it’s not one and done. It’s ongoing. Again, to get back to that word: intentionality.
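As a rough illustration of the kind of technical control Sheila means, encoding policy so the system itself blocks a disallowed use before it happens, here is a hedged sketch of purpose limitation as code. The dataset names, purpose tags, and rules are all invented for illustration, not any company’s real policy engine.

```python
# Hypothetical "data ethics by design" control: every dataset carries the
# purposes it was collected for, and a gate refuses any other use.
ALLOWED_PURPOSES = {
    "smart_water_readings": {"conservation_feedback"},
    "ad_interactions": {"ad_measurement", "frequency_capping"},
}

class DisallowedUse(Exception):
    """Raised when a use falls outside the purposes the data was collected for."""

def use_dataset(dataset: str, purpose: str) -> str:
    # Technical control, not just an administrative policy document:
    # the check runs in code, before any processing occurs.
    if purpose not in ALLOWED_PURPOSES.get(dataset, set()):
        raise DisallowedUse(f"{dataset} may not be used for {purpose}")
    return f"processing {dataset} for {purpose}"

print(use_dataset("smart_water_readings", "conservation_feedback"))  # allowed
try:
    use_dataset("smart_water_readings", "insurance_pricing")         # blocked
except DisallowedUse as err:
    print("blocked:", err)
```

The design point is that a secondary use, like the insurance example discussed later in the episode, fails loudly in the system itself rather than depending on everyone remembering the policy.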

Marie: What that reminds me of is just iterating. And I think that also makes sense for this type of environment, where the regulations will be evolving over time and customer expectations will be evolving over time. It’s going to be very iterative. You’re not going to build something and say: this will be good for the next five years, my job is done.

Sheila: Hahaha. You are so right. You know, it reminds me of the conversation right now. I mean, we’ve all heard the word consent, right? When you go online right now, you see a foggy curtain come over and obscure the whole screen, or you have a pop-up from the bottom that obscures part of the screen, or maybe a curtain down that says: we use cookies, click here to consent. That’s meaningful transparency with what I would call an acknowledgement, and it’s a little disruptive. We know cookies are happening. There may be a better approach. But what I think is really dangerous is for collectors and users of data to think that, as long as they can get you to click “I agree,” they’ve got an opt-in consent. That presumes that the person understood the full range of data use

Sheila: they were consenting to, that they gave an active, informed choice, and that now the collector of data can use it for whatever is packed behind that consent, behind all the notice. I don’t know that that’s the right thing. This notion of consent has a role to play, certainly; we all need choices, and we need some really effective transparency. But I think the bigger obligation is this notion of accountability, meaning collectors and users of data should be accountable for using data for benefit and for detecting and preventing harm. And, to the point you made, Marie, and you made, Lexy, that is not static either. We evolve over time. Twenty years ago cookies were brand new and we were all scared, and there was a bunch of front-page stuff, many, many stories: all these cookie things, these cookie things. And then it got all very comfortable.

Sheila: Oh, that’s the way that the Internet works. That’s how they remember me. When I come back, I get benefit, I get value. And we all got very comfortable with that. And of course now there are secondary uses, and some bad actors have used data for bad things. And so now we’re concerned again. So I make the point to say I agree: it is an ongoing, evolving mission to make sure that data use delivers value and is very sensitive to what we, the people, feel is good and beneficial, and that we have the amount of participation that is meaningful to us.

Lexy: You mentioned that, obviously, this is changing: the things that scared us maybe 20 years ago about data use were, at least for some time, not necessarily as scary, and they’ve kind of come back to the fore. What do you think is the next challenge? What scares you right now, and what are you most excited about in our digital future?

Sheila: What scares me now is AI, you know, and it excites me. It’s both thrilling and terrifying. There’s all this great and wonderful stuff. We’re going to solve disease, maybe eradicate disease, do population health. We’re going to help the underprivileged in ways we never could before. We’re going to learn things. We’re going to have clinical trials and be able to find the people it matters to the most. We’re going to create so much convenience and value, and a lot of the little stuff we humans worry about is going to be taken care of by these AI agents. One of my favorite things is smart medicine, and another one of my favorite things is smart water. I’m a conservationist, and I love the notion of really understanding my water consumption, and an AI agent that can help me and my household do better.

Sheila: There’s a dark side, right? So AI comes with the promise to help humanity improve and deliver on all of our human wishes for utopia. The dystopian side of it is things like: I put smart water in my home, and all of a sudden my children (I’ve got teenagers) go to get their first jobs, and their insurer says, well, I’ve got your smart water consumption, and you didn’t do a very good job of brushing your teeth, so your rates are going to be different. And that’s true and real, because we know that smart water devices are so sophisticated and nuanced that they know these things. So I’m thrilled by all the promise, and I’m terrified that we won’t get our AI ethics right, that it won’t be people-centered, that people won’t have the ability to participate in a meaningful manner, and that the tech won’t be sensitive enough to what people care about.

Sheila: And I think that’s the new frontier. I think those are the big questions.

Marie: There are a number of organizations and groups around the globe right now discussing AI ethics. Are you involved in some of those conversations as well?

Sheila: Yes, yes. I think they’re very important conversations, and yes, I am involved in contributing. Of course there’s a great deal of learning. I’ve been working in the data field for a couple of decades, and if that was complex, AI is complex on an order of magnitude. As you will know, we have machine learning with deep neural nets, and we have little AI agents, and then big self-learning AI agents. So it’s both a wonderful, exhilarating journey of knowledge for me and then understanding it well enough to collaborate with my policy counterparts all over the world and bring it back to human goodness.

Lexy: So do you see any particular areas or kind of front-runners on these policies or in AI ethics? Who’s getting there? Who’s maybe getting it right, and who’s maybe doing it wrong? Is there such a thing?

Sheila: I think there are a number of really exciting efforts that I’m excited about. There’s a body of work coming out of the OECD; that’s great. The Future of Privacy Forum, led by Jules Polonetsky, is doing some great work. Marty Abrams at the Information Accountability Foundation is leading some work and leading some global dialogue. Bojana Bellamy and Markus Heyder at the Centre for Information Policy Leadership are doing some great work. Stanford’s got an initiative, and I’m plugged into a number of those conversations. And again, the most exciting work is ahead.

Lexy: Stanford’s actually a very interesting and topical one, as just a couple of weeks ago Tim Cook was there and spoke at their commencement, addressing the issue of accountability in data.

Sheila: Wow. Well, the term of art in Washington, D.C. is enforceable accountability, because in Washington, D.C. you’ve got to have good policy and then you’ve got to navigate good politics. So yeah, I agree with him, though: the notion that data is an abstract of a human, and it deserves all of the dignity that we humans deserve. And you’ve got to be accountable for what you’re doing with data, because it is an abstract of a person, and we need to be first and foremost human-centered in all of our tech design, not just as profiteers. That means really ensuring that data and data science are good for people first.

Marie: So Sheila, you bring up an interesting point in terms of the idea that data is an abstract of a person. And you had an example a moment ago where you were talking about smart water. As you were saying, smart water is so sophisticated it can tell if you are brushing your teeth or not. And from the insurance company’s point of view, that’s great for them to know, because then they can price their insurance rates better. But if we really think of data as having the same rights as a person, that falls under your privacy and what you do in your own home. So should that really be part of somebody’s data set that they can use to price things? As we’re having these conversations (and don’t worry, Lexy and I bring up questions all the time on this podcast, and we know we’re not necessarily going to be able to answer them), I feel like that’s one of those places where there’s a question: is collecting that data really something that you can do responsibly and have accountability for? Because once it’s inside somebody’s home, should that be private? Should that not be part of what you use for figuring out the pricing of an insurance policy?

Sheila: So, the answer is… yes. These are the questions of the digital age. This is what we mean when we say data use needs to be human-centered and people need to have the ability to participate. Our data laws, our data regulations, need to be use-specific. Let me give you another great example.

Sheila: Smart medicine. And I love the one Proteus has done, by my dear friend and colleague, Dr. George Savage. I’ve told this story many times because it’s so exciting, and it really is the digital future. The sensor, the size of a grain of sand, is embedded in a med, where they layer a filament between magnesium and copper. There’s a wearable that sticks onto your ribs, and when you ingest the med and your stomach acids dissolve the layer and the two filaments touch, it begins to conduct. Your wearable knows that you took the medicine and then begins to measure your body’s reaction.

Sheila: Now, it also has an app that it connects to over Bluetooth. So your patch, your wearable, transmits the diagnostics to the app. The patch also has an accelerometer, which is sort of like a gyroscope, so it knows if you’re walking, if you’re prone, if you’re upright, where you are, what you’re next to. So it’s got this massive diagnostic capability. So we go from a world where Sheila goes to see the doctor, I don’t know, once a year, once every six months, and I have a verbal exchange with the doctor. How’s your medicine working? Not too good, doc. Because guess what? I didn’t take it very well. And he goes, well, let’s up your dose. So he ups my dose, I take my medicine, I feel terrible because it’s not the right dose for me, so I quit taking it, and then I don’t get any benefit.

Sheila: And on efficacy: the statistics I’ve heard for med adherence are that people take their medicine the right way about 14% of the time. That’s really low. When you have this observed state (because observation changes behavior), adherence jumps up to about 84%. So all of a sudden we know if medicines work, and which humans they work on and don’t work on. We can begin to understand population health. We can understand other correlative, or maybe even causal, factors in play. We can understand dosage levels. We can really revolutionize the human health and wellbeing experience. However, what if that data, which has so much promise for good, was used for bad? I tell you that story to say, very much to your point: data use in context, where the benefits are accruing to us and we have an ability to participate, and not just the harm.

Sheila: And these are the kinds of questions that have to be asked, and we have to consider them within the human lens. These are the questions of the digital age. That smart water, for water conservation? Great. For home insurance, where I get a lower rate if I have a smart water device in my home? Okay. But is it okay to put it into actuarial tables for my health and life insurance? Maybe not. Maybe I should be able to choose that, because it is a sensitive, material use of data. But that lights up where we are in the world of data and data science, and why you keep hearing me talk about human-centeredness and placing all the human dignity on the data: because it’s an abstract of humans. It’s an abstract of us.

Lexy: In those examples, there are a couple of things that stood out to me. One is the sharing of information beyond its initial intended purpose. So in the smart water scenario, the initial intention was for you to have better visibility into your water use, your water conservation efforts. But we presume that the insurance company is not the one who created this smart water monitoring device, and so at some point there was some sort of sharing agreement that you may or may not have consented to, which goes to your point earlier with regard to consent.

Lexy: On the other side of that, in the medical use, it occurs to me that we might say: in order for you to receive this medication that we think is beneficial, you have to acknowledge that you’re going to be monitored. You have to have this wearable device on you at all times; either it’s embedded, which is a surgical procedure, or it’s something you have to be actively wearing. You have to agree to have the app on your phone. You have to agree that you will be monitored every time you take this medication. At what point do some of these things become a barrier to the benefit? Because you want that acknowledgement, you want that consent, you want that active participation as a digital citizen. How much does that create a barrier to the benefits?

Sheila: Boy, that’s a big question. I don’t have all the answers today, but you are asking the right questions, and this, again, is the body of work ahead of us, right? We need a federal law. We need an accountability-based federal law. We need to understand that this is incredibly complex and nuanced. Again, as we accelerate into the digital age, I’ll say this: data is not just big, it’s infinite. Data is infinite. Digital is inevitable.

Sheila: We are becoming digital natives. How many smart devices do you have at home? Countless. I have countless; I can’t even count, and the list goes on. Smart thermostat, smart TV, smart coffee pot, smart refrigerator, smart cameras, smart gates, iPads, phones, watches, wearables. I’ve got a smart bra. Data is infinite and digital is inevitable. But still, for the data use, the laws that regulate it, the guidelines that we implement, and how we hold collectors, innovators, and users of data accountable, we need to keep the framework that it’s good for people. And when there are these, what I’ll call material uses, things like determining my insurance rates, or my credit, or my employability, or getting a mortgage, or getting an apartment rented, for those we need some extra guard rails.

Sheila: We have to judge data use and data science in context, and it’s highly complex. We need a federal law that will allow innovation to continue and position America to be competitive on a global scale, because the world is globalizing. But we don’t have all the answers today. We do have a lot of the questions today, and the foundational guard rail, as we design what these regulations and requirements look like, is: let’s not do anything artificial. Let’s not disrupt innovation or competition, and that means not disrupting data flow. We cannot do that. Instead, we have to create this accountability construct, so that if you received data, you’re still responsible for being accountable: for good purpose, and for detecting and preventing harms. And then we begin to define what’s in the harm bucket and what individual participation goes with that, and what’s in the benefit bucket and how those participation constructs work.

Lexy: There are a couple of things that I’d love to wrap up with. The first is: we’ve talked about a lot of different scenarios and uses of data, and you’ve mentioned so many smart devices out there today. If there were one thing that you, Sheila, could build based on technology, what would it be, and how would you ensure that it is ethical in the data use of what it collects?

Sheila: Well now, like a device or something?

Lexy: Yeah, whatever it might be.

Sheila: Oh my goodness.

Lexy: What isn’t yet a smart device in your home that you wish were a smart device?

Sheila: Um, I don’t know. We’re pretty digital, you know; I’ve got two teenagers, and we’re pretty digital native. You know what I wish I had? I do have one of those smart in-home agents, and I worry a little bit about it. It’s a little opaque.

Sheila: So we don’t use it, and it’s not as smart as I want it to be. So I would like an agent that does much better voice command, so that we really achieve that thing called placefulness. Have you heard that construct? You arrive at a place in time and you’re recognized, known, treated, and optimized to your benefit. I would like to have that in our own homes, but in a way that respects, as Marie made the point earlier, the privacy of your own home, your sanctum, with data going back and forth to the mother ship, as it were, for whatever your devices are. I think we’ve seen kind of an opaque experience, and not as much participation as I would like. So I think better voice command is where I would like to see that go.

Sheila: You know, I’ve got a new vehicle, my mobile supercomputer as it were, and I do a lot on my smart device. I do a lot of voice commanding on the phone. It’s not great. It’s not perfect. I have had to wait till a stoplight, or pull over, or wait until my destination to read something. So I think voice is the way of the future.

Sheila: I think voice actually is super special. Think about when we get voice perfected and we get translation to go with it. Every person in the world speaks; we may not all be able to read and write. Think about how it will allow people who cannot read and write but have a smartphone to begin to participate in the global digital economy by voice. It overcomes these barriers of socioeconomic disadvantage in a way that few other things do, because we speak as humans.

Sheila: So I think, if I were going to innovate and ask for something, I would ask for an amazing voice capability with all the right controls and data stewardship to back it up.

Lexy: Fantastic. I look forward to the universal translator of the Star Trek era coming soon from Sheila.

Sheila: Hahaha. You guys are so fun. Thank you for a really, really fun morning. I appreciate it.

Lexy: My pleasure. Thanks so much for joining us. Any final thoughts, or any question that you’ve always wanted to answer in an interview that has not been asked?

Sheila: I think you wrapped it up pretty well. We’ll get together when we have the universal translator and the ultimate voice control, and I think that will bring the world together even better. We’ll all be global citizens, and we’ll be able to address very important humanitarian issues of poverty and disease and disadvantage. I think we’ve got a lot of problems in the world that need to be fixed, and I think data, technology, and data science can do that.

Lexy: Thank you so much for joining us Sheila. This has been Lexy Kassan.

Marie: and Marie Weber.

Lexy: Thanks everyone for joining us.

We hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app. Also, please consider supporting us for just $5 per month. You can help us deliver more and better content.

Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics where we’re discussing model behavior. See you next time.

This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.
