Artificial Intelligence has the potential to disrupt so many different dimensions of our society that the White House Office of Science & Technology Policy recently announced a series of four public workshops to look at some of the possible impacts of AI. The first of these workshops happened at the University of Washington on Tuesday, May 24th, and I was there to cover how some of these discussions may impact the virtual reality community.
The first AI public workshop was focused on law and policy, and I had a chance to talk to three different people about their perspectives on AI. I interviewed the White House Deputy U.S. Chief Technology Officer Edward Felten about how these workshops came about, and the government’s plan for addressing the issue.
I also talked with workshop attendee Sheila Dean, a privacy advocate, about the implications of AI algorithms making judgments about identified individuals, and I spoke with Ned Finkle, the Vice President of External Affairs at NVIDIA, about the role of high-end GPUs in the AI revolution.
LISTEN TO THE VOICES OF VR PODCAST
There are a number of take-aways from this event that are relevant to the VR community.
First of all, there are going to be a number of privacy issues around the biometric data that could be collected from virtual reality technologies, including eye tracking, attention, heart rate, emotional states, body language, and even EMG muscle data or EEG brainwaves. A number of companies at the Experiential Technology and Neurogaming conference were using machine learning techniques to analyze and make sense of these raw data streams. Storing this type of biometric data, along with the interpretations derived from it, could have real privacy implications. For example, Conor Russomanno warned me that EEG data could have a unique fingerprint, so even storing anonymized brainwave data could still be traced back to you.
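To make that concern concrete, here is a minimal sketch of what this kind of pipeline could look like: a classifier that maps per-window biometric features to an inferred emotional state. It uses Python with scikit-learn and NumPy on synthetic stand-in data; the features, labels, and window counts are hypothetical and not drawn from any particular company's product.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-window features extracted from raw biometric streams
# (e.g., mean heart rate, EEG band power, gaze fixation duration).
rng = np.random.default_rng(42)
n_windows, n_features = 1000, 6
X = rng.normal(size=(n_windows, n_features))   # stand-in feature matrix
y = rng.integers(0, 3, size=n_windows)         # 0=calm, 1=engaged, 2=stressed (made-up labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a classifier that maps biometric features to an inferred emotional state.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The privacy concern: the model's output (an emotional-state label per time
# window) is a new, derived piece of personal data the user never explicitly provided.
print("held-out accuracy:", clf.score(X_test, y_test))
print("inferred states for first 5 windows:", clf.predict(X_test[:5]))
```

Even if the raw streams were discarded, storing those inferred labels would still amount to sensitive personal data, which is why the anonymization question matters.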
I also discussed tracking user behavior and data with Linden Lab's Ebbe Altberg, including the potential dangers of companies banning users based upon observed behavior. Will there be AI approaches that either grant or deny access to virtual spaces based upon an aggregation of behavioral data or community feedback?
Sheila Dean was concerned that she didn't hear many voices advocating for the privacy rights of users in the context of some of these AI-driven tracking solutions. She sees that we're in the middle of a battle where our privacy awareness and rights are eroding, and that users need to be aware of what's at stake when AI neural nets start to flag us as targets within these databases. She says that consumers need to advocate for data access, privacy notice and consent, and privacy controls, and that people need to be more aware of their privacy rights. We have the right to ask companies and the government to send us a copy of the data that they have about us, because we still own all of our data.
Sheila also had a strong reaction to Oren Etzioni's presentation. Etzioni is the CEO of the Allen Institute for Artificial Intelligence, and he had a rather optimistic take on AI and its risks. He had a slide that labeled SkyNet as a "Hollywood Myth," and Sheila countered that SKYNET is a very real NSA program. She cites an article by The Intercept reporting that there is an actual NSA program called SKYNET that uses AI technologies to identify terrorist targets.
At the same time, SkyNet is kind of seen as the “Hitler” of AI discussions, and we could probably adapt Godwin’s Law to say, “As an online discussion [about AI] grows longer, the probability of a comparison involving [SkyNet] approaches 1.”
https://twitter.com/adurdin/status/735227827759505408
There have been a lot of overblown fears about AI fueled by dystopian sci-fi dramas coming out of Hollywood. These overblown fears have the potential to prevent AI from delivering all sorts of contributions to the public good, from saving lives to making us smarter.
Microsoft Research's Kate Crawford sees that going straight to SkyNet can suck the oxygen out of the nuances of the issue. She advocated for stronger ethics within the computer science community, as well as a more interdisciplinary approach that encompasses as many different perspectives on AI as possible.
In Alex McDowell's presentation at Unity's VR/AR Vision Summit, he argued that VR represents a return to valuing multiple perspectives. Stories used to be transmitted across many generations through oral traditions, where tribes would adapt and change the story based upon their own recent personal stories and experiences.
Alex says that the advent of print, film, and TV marked a shift where we started to see canonical versions of stories told primarily from a singular perspective. But VR has the potential to show us the vulnerability of the first-person perspective, and as a result to put more emphasis on ensuring that our machine learning approaches include a diversity of perspectives across many different domains.
Right now AI is very narrow and focused on specific applications, but moving towards artificial general intelligence means that we’re going to have to discover some of the underlying principles that are transferable to building up a common sense framework for intelligence. Artificial general intelligence is one of the unsolved and hard problems in AI, and so no one knows how to do this yet. But it’s likely that it’s going to require cross-disciplinary collaboration, holistic thinking, and other ingredients that have yet to be discovered.
Another takeaway from this AI workshop for me is that VR enthusiasts are going to have the hardware required to train AI networks. Anyone who has a PC capable of running the Oculus Rift or HTC Vive already has a high-end graphics card, and cards like the GTX 970, 980, or 1080 use the same architectures as NVIDIA's even higher-end GPUs that are used to train neural networks.
When VR gamers are not playing a VR experience, they could be using their computer's massively parallel processing capability to train neural networks. Gaming and virtual reality have been a couple of the key drivers of GPU technology, and so AI and VR have a very symbiotic relationship in the technology stack that's enabling both revolutions.
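As a rough illustration of that idea, here is a minimal sketch of training a tiny neural network on whatever CUDA-capable card happens to be in a VR-ready PC. It assumes PyTorch is installed and uses synthetic data; it is meant only to show that the same GPU that renders VR frames can run neural-network training, not to represent any particular workload.

```python
import torch
import torch.nn as nn

# Use the gaming GPU if one is available (e.g., a GTX 970/980/1080 in a VR-ready PC).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("training on:", device)

# A tiny multilayer perceptron on synthetic data, purely to show that the card
# used for VR rendering can also run training.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(4096, 32, device=device)        # stand-in input features
y = torch.randint(0, 2, (4096,), device=device) # stand-in binary labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final loss:", loss.item())
```

In practice, serious training runs use far larger models and datasets, often spread across many data-center GPUs, but the programming model is the same from a GTX-class card on up.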
Self-driving cars are also going to have very powerful GPUs as part of the parallel-processing brains that enable the computer vision, sensor processing, and continuous training of the neural-net methods of driving. There will likely be a lot of unintended consequences of these new driving platforms that we haven't even thought of yet.
Will we be playing VR driven by the GPU in our car? Or will we be using our cars to train AI neural networks? Or will we even own cars in the future, and instead switch over to autonomous transportation services as our primary mode of travel?
Our society is also in the midst of moving from the Information Age to the Experiential Age. In the Information Age, computer algorithms were written in logical, rational code that could be debugged and well understood by humans. In the Experiential Age, machine learning neural networks are guided through a training "experience" by humans. Humans are curating, crafting, and collaborating with these neural networks throughout the entire training process. But once these neural networks start making decisions, humans can have a hard time describing why the neural net made a given decision, especially in cases where machine learning processes start to exceed human intelligence. We are going to need to start creating AI that is able to understand and explain to us what other AI algorithms are doing.
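As a small, concrete example of what interrogating a trained model can look like, here is a sketch using permutation importance from scikit-learn on a toy classifier. The data is synthetic and this is just one simple post-hoc method, not how any particular production system explains itself, but it shows the flavor of analysis that explainability tooling builds on.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Toy data: which inputs actually drive the model's decisions?
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Only the first two features matter in this synthetic target.
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops -- a crude, after-the-fact "explanation" of which
# inputs the trained model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```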
Because machine learning programs need to be trained by humans, AI carries the risk that some of our own biases and prejudices could be transferred into computer programs. ProPublica conducted a year-long investigation into machine bias, and they found evidence that software used to predict future criminals was "biased against blacks."
AI presents a lot of interesting legal, economic, and safety issues, which has Ed Felten, the Deputy U.S. CTO, saying, "Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions."
There is going to be a whole class of jobs that are replaced by AI, and one of the most at risk is probably truck drivers. Pedro Domingos said that AI is pretty narrow right now, and so the more diverse the set of skills and common sense required to do a job, the safer your job is right now. With a lot of jobs being displaced by AI, virtual reality may have a huge role to play in helping to train displaced workers with new job skills.
AI will have vast implications for our society, and the government is starting to take notice and is taking a proactive approach by soliciting feedback and holding these public workshops about AI. This first AI workshop was on the Legal and Governance Implications of Artificial Intelligence, and it happened this past Tuesday in Seattle, WA.
Here are the three other AI workshops that are coming up:
- June 7, 2016: Artificial Intelligence for Social Good in Washington, DC
- June 28, 2016: Safety and Control for Artificial Intelligence in Pittsburgh, PA
- July 7, 2016: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term in New York City
Here’s the livestream archive of the first Public Workshop on Artificial Intelligence: Law and Policy
Here are a couple of write-ups of the event:
- First White House AI workshop focuses on how machines (plus humans) will change government
- What to Do When a Robot Is the Guilty Party
- Artificial Intelligence Is Far From Matching Humans, Panel Says
Darius Kazemi is an artist who creates AI bots, and he did some live-tweeting coverage with a lot of commentary starting here (click through to see the full discussion):
https://twitter.com/tinysubversions/status/735198254648819712
If you have thoughts about the future of AI, then you should be able to find the Request for Information (RFI) on the White House Office of Science & Technology Policy blog here very shortly.
Rough Transcript
[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to The Voices of VR Podcast. Today on the podcast, I'm going to be covering three interviews that I did at the Artificial Intelligence Law and Policy Workshop that just happened at the University of Washington. And so this was a public workshop sponsored by the Tech Policy Lab at UW, as well as the White House's Office of Science and Technology. And so this was just a gathering of different law and artificial intelligence experts talking about the future and the different implications in terms of how we should deal with all sorts of various issues that come up with AI in the future, including privacy and liability and all sorts of thorny legal questions about AI. So this is a series of workshops the White House is doing to gather information in form of a request for information as well as just getting feedback from the academic community. And so in terms of virtual reality, I think that VR and AI are these two revolutionary technologies that are really coming into the mainstream kind of at the same time and there's going to be a lot of overlap between the two. One quick example for people is to think about different experiential technology and neurogaming applications where you're able to take all sorts of raw biometric data and use all sorts of sophisticated machine learning neural networks that have been trained to be able to take a huge stream of data from our EEGs, let's say, and be able to extrapolate our different emotional states from that data. So all sorts of information that is going to be coming available to both game developers and companies that could have various privacy implications. And so I wanted to kind of cover this summit from that angle. And so what I have today on today's podcast is three interviews that I did at this AI public workshop, starting with Edward Felton, who is the White House Deputy U.S. Chief Technology Officer, who helped to catalyze the meeting of this first public workshop. as well as Sheila Dean, who is a privacy advocate who has all sorts of different specific takeaways from the workshop today. Ned Finkel, who is the Vice President of External Affairs at NVIDIA. And so these GPUs from NVIDIA are actually a big part of being able to train these AI neural networks through the parallel processing that's available. So we'll be kind of talking about the role that NVIDIA is playing in this AI revolution that's just happening. So that's what we'll be covering today on the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by the Virtual World Society. The Virtual World Society wants to use VR to change the world. So they are interested in bridging the gap between communities in need with researchers, with creative communities, as well with community of providers who could help deliver these VR experiences to the communities. If you're interested in getting more involved in virtual reality and want to help make a difference in the world, then sign up at virtualworldsociety.org and start to get more involved. Check out the Virtual World Society booth at the Augmented World Expo, June 1st and 2nd. So these three interviews all happened on Tuesday, May 24th at the University of Washington School of Law, located in the William H. Gates Hall. So with that, let's go ahead and dive right in.
[00:03:31.321] Edward Felten: My name is Ed Felten. I'm a Deputy U.S. Chief Technology Officer at the White House, and I'm here in Seattle for a workshop on artificial intelligence that we co-sponsored with the University of Washington Law School.
[00:03:44.666] Kent Bye: Great. So maybe you could tell me a bit about, like, how did this come about?
[00:03:47.587] Edward Felten: Well, this is part of a bigger series of four public workshops that we in the Office of Science and Technology Policy are running, and it's an effort by the administration to reach out to the public and talk about a bunch of issues in AI. This one is about law and governance in AI, but we're also looking at AI for public good, we're looking at the social and economic impacts of AI, and we're looking at issues of safety and control in the technology.
[00:04:13.840] Kent Bye: So I guess there's a kind of a dialogue that's starting here. This panel is kind of like the initial step with that where you're kind of having a number of people talking, but what are the different methods for people to get involved to actually participate in this process a little bit more beyond that?
[00:04:27.324] Edward Felten: Well, there's a bunch of things people can do. They can attend the workshops. There's one on June 7th in Washington, D.C. There's one on June 28th in Pittsburgh and July 7th in New York. All of those will be live streamed so people can watch. We're also going to be soon issuing requests for information, what in government we call an RFI, that is asking people to comment on issues about artificial intelligence, to send information or comments to the government. And all of this, we're going to factor all of this stuff into a big public report that we're going to be issuing in the fall.
[00:05:03.498] Kent Bye: What was the thing that really catalyzed this discussion? Because I know there's a lot of things that are happening. There's like this explosion and growth in AI. Perhaps a lot of fears, but a lot of optimism in terms of the potential. So what were the things that were really popping up on the radar that it got to the White House to be able to start to have these discussions?
[00:05:19.515] Edward Felten: Well, we saw interest in AI really all over the place. There's a lot more discussion of it in the press. There's a lot more investment from the industry. We saw applications of AI popping up across the government and a lot of AI-related public policy issues. Questions around autonomous aircraft and self-driving cars and all kinds of other things that came up. And there was a certain amount of chatter about some major fears that a few people have had about AI related to things like jobs or the long-term impacts of very intelligent systems. And so we felt it was a good time to talk to the public and for government to engage and try to up our game in terms of how we deal with this.
[00:06:02.094] Kent Bye: Do you think that there were any clear outcomes in terms of AI and public policy that were kind of discussed here today?
[00:06:09.533] Edward Felten: Well, we had a bunch of interesting discussions about questions of law, questions of liability, or what is the role of government. We weren't trying to get to a solution to any of these issues today. The goal was to have a good conversation, to hear from people who come from different kinds of expertise, and I feel like we really succeeded at that today.
[00:06:29.598] Kent Bye: It seems like artificial intelligence is a little bit of like this black box where there's these neural nets and they're trained and we don't really actually know what's happening. It's not like we're programming them with algorithms. And so it comes into all sorts of interesting implications of that, of liability, but also just open questions about what are the legal implications of something that we may not even fully understand.
[00:06:50.650] Edward Felten: Well, for sure one of the challenges of artificial intelligence is the complexity of the technology and sometimes the difficulty of understanding why it did what it did. It's kind of inherent in the nature of intelligence. If somebody or something is smarter than you at something, then you probably won't be able to predict everything that it's going to do. But still, we need to be able to deal with it. We need to be able to think about the safety of these systems. We need to understand how to apply the law. And as a government, we need to understand how to use the technology so we can do our own job better.
[00:07:24.650] Kent Bye: And it seems like the technology moves at such a pace that's much faster than our legal or political system. And so how do you deal with something that's growing exponentially, but yet a lot of the normal sort of everyday person may not understand? It takes a very high level of understanding within the Silicon Valley and the PhDs and computer scientists. And then you're talking about taking these issues into Congress and the White House and making decisions about something that we may not really even fully understand yet.
[00:07:52.469] Edward Felten: Well, the pace of change is a big issue. We need to make sure that we're talking to people who are at the cutting edge in the field. We need to make sure we have people in government who are at the cutting edge and that senior decision makers are getting good advice based on the best technical information. It is often true that the law changes more slowly than technology, but that's one of the virtues of the law is that when it's working well, it provides a flexible framework that can be applied even as conditions change.
[00:08:23.355] Kent Bye: And so, you know, was there another meeting that was a private meeting or other things that were happening today with other discussions? And maybe you could talk a bit about like, what else happened today?
[00:08:31.581] Edward Felten: I mean, there have been, we've had private meetings with various parties. We've been invited to some private meetings, including today. And we're really interested in having whatever conversations we can have, because we're trying to meet as many people as we can and learn as much as we can.
[00:08:47.527] Kent Bye: Well it seems like this is something with self-driving cars with there's certain fields of very narrow specific jobs that may be put in danger and so how do you see that moving forward in terms of like trying to map out the landscape and inform the public and to you know try to deal with the impact of this changing landscape?
[00:09:06.831] Edward Felten: Economists have looked at the question of what the impact on jobs will be, and I think it's clear that some jobs will be displaced and some new jobs will be created. And from a policy standpoint, the challenge is how do we make sure that The kids today are being educated for the jobs and for the life that they are going to be living in the future. And how do we help people whose jobs have been displaced get trained and find the new jobs that become available? And that's not a new policy issue for us as a government. Technology has been changing the nature of jobs for a long time, and we've had programs in this administration and previous administrations to try to deal with that and to try to make sure that people are able to continue to live fulfilled lives and to have meaningful work, even as technology changes.
[00:09:58.566] Kent Bye: Another big topic that came up was big data and privacy. What are some of the privacy and big data implications when you're looking at artificial intelligence?
[00:10:06.639] Edward Felten: Some AI systems, especially ones involving machine learning, rely on having large data sets for training purposes. And so that raises big data issues. And to the extent that data is about people and their behavior, then that can raise privacy issues as well.
[00:10:22.846] Kent Bye: So what's next then? Going from here, then what happens?
[00:10:26.188] Edward Felten: Well, we have three more workshops coming up and we have the request for information. We expect to get a bunch of comments from the public and we're going to continue to learn. We're working within the government to try to mobilize across agencies and make sure that we are thinking in a systematic way across government about how to use AI and eventually we'll be issuing a public report and a national research and development strategic plan in AI and machine learning.
[00:10:54.338] Kent Bye: Great. And so, what do you personally think is kind of the ultimate potential of artificial intelligence and what it might be able to enable?
[00:11:01.780] Edward Felten: I think AI is going to change our lives a lot in the long run, and some of the implications are hard to predict. If you look just at a technology like autonomous vehicles, self-driving cars, They're not only going to save lives and give mobility to people who can't currently drive, they also I think generate a lot of economic opportunity because they will drastically lower the cost of delivering things and especially in an urban environment or a place where a population is fairly dense. I think a lot of new economic opportunities open up in the same way that Things like online auction sites or the ability of people to start a small business and sell things online opened up whole new economic opportunities. I think that the availability of autonomous vehicles for delivery is going to make a huge difference. That's just one example.
[00:11:55.255] Kent Bye: Any other final takeaways from today's summit?
[00:11:58.556] Edward Felten: I thought it was a really interesting conversation. Not only the sessions itself, but also the conversations during the breaks at the coffee and so on were very interesting and informative. And I've gone home with not only a lot of good ideas, but a nice pile of business cards of people I can talk to and get more information.
[00:12:17.383] Kent Bye: Awesome. Well, thank you so much. Thanks. So that was Edward Felten. He is the White House Deputy U.S. Chief Technology Officer. And I'm just going to go ahead and dive right into the next interview with Sheila Dean, who is a privacy advocate.
[00:12:31.509] Sheila Dean: My name is Sheila Dean, and I'm somebody who's been an advocate for identity policy and the policy of the identified person, like the person who is the identified property in a database, or the identified person in context of a government structure or even a corporate structure. So I have ideas and understanding about what happens to your data when it goes in there, how it may be perceived, and the control issues that orbit around that. And everybody has control issues because they want to control their reputation, they want to control how they're perceived, how they're judged, how they're understood when it comes to their identity.
[00:13:13.249] Kent Bye: And so, you know, it seems like we're in this transition between the information age and to the experiential age, but yet we still have this strong ethic of a lot of business models of the information age are based upon tracking our identities across many different websites, our behaviors, and being able to aggregate a whole bunch of information that probably goes beyond what we even know about ourselves. to be able to then sell to advertisers to be able to target us for different ad campaigns. And so we have a few very large companies that are owning a lot of this data and we don't really have... That's where I'm going to correct you.
[00:13:46.369] Sheila Dean: You own your data. Every part of what you put in that system is your IP. It's your property, not theirs. And what I think got murked and kind of glazed over and not articulated whatsoever during this panel was the property of data ownership. As if there was a debate, you own your data. And the more that someone like you would stay the course and say, okay, I own my face, I own my fingerprints, I own all of my bodily privacy, and this is what this concerns, your face print, your FRT, your fingerprints, and anything that would go into the FBI database, is your property. And if you were to write a letter to the Department of Justice, to their privacy office today, and say, I want you to not use this information, you do not have my consent, you can do that today. Unfortunately, the EFF is now fighting a little bit to make sure that the FBI doesn't have the latitude to make it exempt for purposes of national security or whatever they're coming up with today. There are some legal exemptions that they can try to apply, but they have to go through a process in order to get approval for that. So it's really up to you and people like me to speak up and say, okay, I'm going to self-advocate for my identity and my bodily privacy to more or less say, when the position of consent is present, I'm going to deny you that consent for things that I don't want you to use my identity for.
[00:15:22.942] Kent Bye: But isn't it also the case that we sign all sorts of terms of service where we surrender our sovereignty of the ownership of our data over to these companies? I mean, there's a lot of big, long legal paperwork that we sign each time we use these services. And whenever we're an authenticated user, then we are on their private property, that then we are submitting our data to them. And so it's a little bit of like we've signed away our right to that data to these companies. That's sort of just my impression of how that's working.
[00:15:52.170] Sheila Dean: Can you say with any certainty? Because I see your face and I see that you're not sure. Because you didn't read the terms of service. And unless you read the terms of service, there is common law. I was recently CIPP certified. You know, one of the principles is that common law would infer that it would be odious or deeply offensive for someone to try to own your face or own you as data, to march into your house and say, I own this house because you used my water fountain. And that that would be the same type of idea. Google saying, well, now I own you. I own the property, the intellectual property of you, and I may do with you what I wish. And if people continue to talk themselves out of it and surrender before the war is over by not reading terms of service and not invoking their rights to intellectual property of themselves, then the war will be over and there will be no fight. So it's more of an intellectual challenge for the person to say, listen, I need to start trying to find a way to negotiate with these companies. And right now there's no negotiating offer on the table. When you get a terms of service, it's never present. So you can write to these companies if they have a lot of intellectual property, so to speak, or prospective identity information about you, you can write these companies and you can ask them to give you what they have. And that's called access. And if they don't give it to you, then you have a battle on your hands, because they think that they own something that they don't really have a right to own.
[00:17:29.737] Kent Bye: So you can go to Facebook or Google or any of these big, large internet companies and say, hey, show me what you have, and they have to give it to you.
[00:17:37.752] Sheila Dean: That is absolutely correct. There was a privacy advocate who used to live in Austin, who moved to the Silicon Valley, who recently got every bit of data that was ever posted or manufactured from Facebook about her. And when she posted that on Twitter, I said, this is amazing. This is awesome. It means that the leverage is shifting for people who know how to ask. Now, she's been a privacy advocate, so she knows how to press the right levers because she knows privacy very well and she probably is also a lawyer, so it helps to have some sort of authoritative means. Just like you would go to a lawyer to ask for your credit history be changed if there's inaccurate information. You may have to go to a lawyer, but you test the system. You test to see if they have good background on you. If they will just give it to you, you don't need to go get a lawyer. But if they don't just give it to you and they say, well, you have to sign a DMCA agreement, do they have a right to invoke that? You know, maybe you're a content creator because right now you're going to create content on somebody's platform in order to do this broadcast. But you still own the content and you make an agreement for use so that the platform has content that they can market for their services, which generates a stream of data based on people who come to the well to listen to that content. So if you get that data based on your effort, free and clear, then you have a relationship with access that's more whole. With AI, it's very top down. And from what I can see today, it's going to continue to stay top down unless people participate in the RFI, which is basically like a massive public input, a federal public input for people to say, well, I'm a person and I need my body intact and I don't want to be attacked by a stray glitchy drone. I need there to be limitations on where my data can go and who gets a hold of it if it's an AI controller. And when I say an AI controller, it would be Google. Google is a huge AI controller. And what they do now is in his own social responsibility. There's no escape. If I can't escape them, they also can't escape me because they have things that belong to me. and it's up to me to go to them and try to hold them responsible. You know, not just as an individual at this point, but I think we all have a duty to do this, but there are people who are specially and uniquely equipped to go in there and say, okay, it's time to have a better policy. It's time to do it more responsible.
[00:20:12.962] Kent Bye: So I was just at the Experiential Technology and Neurogaming Conference and talking to a software-as-a-service company that was essentially taking EEG data and using their own special algorithms to be able to parse through this data to be able to extrapolate different types of emotional states that someone goes through. I can imagine a future where we have so much raw data that is ours, but the interpretation of it, we would have no capability of even making any sense of it. It would just be a ream of ones and zeros that is essentially meaningless to us. But yet, companies that have these AI algorithms are going to be able to extrapolate and make sense of that data beyond what humans can do.
[00:20:51.770] Sheila Dean: And that's the thing, if you have no control over the interpretation of what happens to you, how different is that from a judge who can basically knock over an engineer and say, okay, make it single out all the black folks. You know, how can you make it more literate for the people? So that's where the social responsibility component will come in. You have to develop transliterations or a translation program so that people who are out of that literacy can understand the meanings of the interpretations of their data sets. without being an engineer. And they do it every day. Google does it every day. They have an email program. They build an algorithm to interpret their algorithm. It's not that it can't be done. It's just that they have to have a desire and a political will to do it.
[00:21:41.033] Kent Bye: Yeah, so it's almost like taking the conclusions that they're coming up to. Whatever, if it's an emotion, then mad, glad, you know, happy, whatever that ends up being, that's the data that we should also have access to, not just the kind of raw EEG waves that may come to that.
[00:21:56.378] Sheila Dean: They do distill it. Facebook did. Remember, they had that behavioral program that could tell if you were enraged. They could tell if you were uncertain or in an ambivalent place. They can tell if maybe you were hesitating, you know, so it's very fine. They've got very fine articulation and they can translate that data to marketers and they can translate it to you. Isn't that fair?
[00:22:24.033] Kent Bye: Yeah, and you know, just at this AI public policy summit today, you know, there's a couple of things that I think were a little concerning in terms of just, you know, these different programs to take publicly available data and to be able to try to come up with a terrorist credit card score, essentially.
[00:22:39.121] Sheila Dean: I think you're talking about WorldCheck. WorldCheck had a system which, you know, aggregates data, but then they came up Somehow they came up with a way to put a credential that would zero somebody as a terrorist. And this was negging people out of leasing agreements and jobs and they didn't know what was going on because they didn't have access to the information. because this is a UK company and it's different over there. But the UK has been dealing with this segregation, isolation of information and targeting people based on what they know about you. Your political affiliation is considered sensitive information. Your gender identification also considered sensitive or protected information over there. because they're older culture, they're an older culture. They have more respect for these things as ways that people will discriminate, but not in my backyard. And so the guy who is the landowner says, I don't want this religious person in my building, so I will get rid of them. You know, I've got the data right here. So that goes on all the time. And it also goes on all the time here. But, you know, people think they're largely ignorant of their rights to see these things. The FCRA, If you get denied a leasing application or a job, you can ask to see that data. But most people don't know. And there are laws, there are provisions right there, but people just don't know about these laws.
[00:24:05.477] Kent Bye: Well, it seems like we're on this cusp of a lot of going down this path of almost inevitability that this technology is going in a certain direction. And, you know, just hearing some of the different ranges of utopian visions of just really optimistic Pollyanna, almost dismissal of the threats of something like Skynet.
[00:24:26.654] Sheila Dean: But then... Skynet really is real. Skynet is real. It really is real. Go into your search engine, whichever one you use, and punch in Skynet Israel. And The Intercept did an article on one of the Five Eyes applications, but it's named Skynet. And it is like an AI algorithmic type approach. And it's unfortunate that the guy presenting at the beginning who was from Paul Allen's foundation, may not have known that. But he was kind of brushing it off like, oh, Skynet's not a thing. It's already a thing. And the government's already weaponizing AI. It's already happened.
[00:25:08.760] Kent Bye: So, you know, I guess it's like, where do we go from here then? I mean, I guess like, if that's already happening, then a lot of people, you know, just even when the Edward Snowden revelations came out that the government is surveilling us, it's sort of like we're surveilled all the time on the web. And so people are just kind of used to that. So they kind of had this thought that, well, if I'm not doing anything wrong, then what's to worry?
[00:25:30.867] Sheila Dean: Well, there's always a lot to worry, but I think if I had a concern, it would be that people are giving up before the battle is over. They're being lightly tossed aside and not questioning it, and not boldly going forth and trying to find out and explore the dark places of where their rights might be hiding. You know, if they're not aware of their rights, They should trust their instincts, their gut, and say, this doesn't feel right to me. This feels wrong. And this is going to touch me. And then they need to go out there and seek those things. then they'll understand where they can rectify the balance. You can rectify the balance, but you would just have to go for the knowledge that you don't have. And part of that is doing things like coming to these seminars and finding out how people perceive you and how people perceive the issues. For instance, Ed Felden did not come across to me as somebody who is particularly jaded or unreceptive to public input, that he would definitely want to hear from people. about any concerns with access or notice or consent. He was a little bit like, you got me on the robes, but if you put those out there when the RFI comes across, so it's the White House Office of Science and Technology Policy blog. There will be something called an RFI, and there you can go and add your input. Whatever it is, all concerns, no holds barred, you can write a paper, you can send it in, you can show up to the Republican, and then you can tell them exactly how you feel.
[00:27:14.842] Kent Bye: So yeah, just to set the context a little bit, we're at this White House event where it's a public policy discussion talking about the law and artificial intelligence. And for you, what is the big takeaway in terms of like, should we be concerned? What do we actually need to be advocating for?
[00:27:30.801] Sheila Dean: People should be self-advocating. I think my whole thing, this entire five, ten minutes we've been sitting here has been about self-advocating. But if I were going to choose a position to advocate for, it would be access, no disconsent, but definitely access to our information and getting a set of controls and a framework in place so that you, if you had questions about what was in a government-led AI, say from the Department of Transportation, because they were talking a lot about AI cars. That's kind of the more neutral one. You know, there's going to be AI cars on the road. If you wanted to know if they had your driver's license information or how many times you drove, just to get a beat on how you're being perceived or what the information is, you deserve to have that information. But if there's not a framework in place so that you can get that streamlined and pointed towards you when you ask for it, say the way Twitter does, like if you wanted to download your Twitter composite, all you have to do is hit a selection button in the back end of your profile and it will send you a file. It'll be your entire Twitter history from that account. The government could do the same thing if it had pertinence to any data collection that they had.
[00:28:45.147] Kent Bye: Yeah, it seems like within the virtual reality community, just to set the context a little bit for the listeners, is that there's all these new companies that are coming out and they could be gathering all sorts of really intense biometric data in terms of either your heart rate, where you're looking with your eye tracking. Well, just from the virtual reality technology is going to have all sorts of affordances to get a vast amount of information that could be extrapolated into our emotional states, our brain waves with certain types of headsets.
[00:29:13.768] Sheila Dean: That's very personal. And I'm sure that people, you know, that's your body. And perhaps maybe they could adapt. Here's an idea. They could adapt some of the HIPAA regulations pertaining to your health information, because that's what it is. it's displaced health information, but maybe they could expand a segment of HIPAA to apply to health information when it's acquired from, say, a self-quantifying device like a Fitbit or maybe a headphone brace that tracks your neural metrics, depending on your consumer agreement that you have with the company. The other thing is that you have to invest. If you're going to spend $500 on a device, if it's going to be taking intimate information from you, you should invest more time in finding out where that information goes and what kind of agreement you are making with this company. If you don't, there's going to be limits to how lawyers can help you when the chips are down.
[00:30:08.151] Kent Bye: What do you think are some of the biggest takeaways that you got from today's AI meeting?
[00:30:13.861] Sheila Dean: The biggest takeaway that I have gotten is that the leadership and I would say the thought leadership at this conference is much more focused on the economic impact. I think they're looking much further ahead to New York. There was kind of a de-emphasizing of law. Towards the end, they really want to see how the economics play out when it's pointed at AI. Job losses, job increases, economic impact, will it boom, will it bust, that kind of thing. Who's going to work, who will not work, who will have the money, who will not have the money, that kind of thing. So that was very present. Also, transport. you know, the liability negligence argument, you know, did my robot do it? You know, did I do it? Did my robot do it? If it's in my hands and it's in my control, was it really in my control when my robot killed this guy, killed this cat, you know, on the road? So that was a really present factor. The other takeaway I had was that, and it was an uncomfortable one for me, and I could totally be misperceiving it, but It was that the identified person is going to be perpetually underneath all of this unless they get some advocacy in order in a short way.
[00:31:33.723] Kent Bye: Awesome. And finally, what do you see as kind of the ultimate potential of artificial intelligence and what it might be able to enable?
[00:31:40.461] Sheila Dean: Oh my, it's totally unlimited. AI has tremendous power for good. That's the other thing, there was an optimistic bent. I'm definitely not a transhumanist by any stretch of the imagination, but I also use artificial intelligence every day when I use a search engine, so it's kind of a pragmatic arbitration when it comes to living in a society where we incorporate AI. We have to come to a place where, I'll use an example, Google's like the roommate I can't get rid of. No matter how hard I try, it's like he still has the key to my apartment and lets himself in any time he wants. And you know, after a while you just kind of like give up and are like, okay, Well, now we just need to work on our agreement about what you do when you're in my house. So that's kind of where AI is going. AI is going to be in your house. It's going to be in your car. It's going to be in your lessons. It's going to be in your radio. It's going to be everywhere. It's going to be ubiquitous, as they say. But you need to have a say-so in what happens, because that's territorial privacy. it's getting in your space. So you definitely need to have a will, a political will and a voice, yada, yada, yada. I think that's the gist of it, is that AI is going to be ubiquitous and it's going to be in our life. It's going to continue to be in our life. And this will be an economy driver. And it's going to shape the next 10 years of our life, I think.
[00:33:09.065] Kent Bye: Okay, great. Well, thank you so much.
[00:33:11.007] Sheila Dean: Thank you.
[00:33:12.385] Kent Bye: And so that was Sheila Dean. She is a privacy advocate. And I'm just going to go ahead and dive into the final interview from this AI law and policy public workshop. And this is with Ned Finkle, who is the Vice President of External Affairs with NVIDIA.
[00:33:26.689] Ned Finkle: My name is Ned Finkel. I'm Vice President of External Affairs at NVIDIA. NVIDIA is a company that's in its early 20s in the Silicon Valley. We're about $5 billion a year in revenue and about $24 billion in market cap. Our focus has been over many years to build GPUs, graphics processor units. And most people know us from PC gaming. And the people that are most avid, enthusiast people in the world use our products to enjoy PC games. And for many years that's been our focus. In the early 2000s, we turned the use of the GPU, which is a massively parallel processor, to scientific problems, and actually through science grants from the NSF. It was experimented with and eventually we brought those people into our company, but we began to change the use of the GPU not only for just games, but also for scientific use. Now as you spend 10 years forward, GPU accelerators are used in our nation's supercomputers all the way down to cars and autonomous vehicles. So there's a whole bunch of places now that GPUs are used for what we would call deep learning, machine learning, and this whole subject area of AI. And so the product and its performance capability outclass anything else out there, so therefore it's being used in this category.
[00:34:48.835] Kent Bye: So a lot of the minimum specifications for virtual reality is the GTX 970, there's a 980, 980Ti, as well as the 1080. So a lot of these virtual reality PCs that are Oculus-ready and ready for the HTC Vive have these really powerful GPUs there. So what could the virtual reality community do in order to train artificial intelligent networks with their PCs?
[00:35:15.538] Ned Finkle: Well, their PCs are all using the same architecture that we use top to bottom. So they are capable of doing training, ultimately. And the question is just the horsepower that's in each of the platforms. So, you know, I think you would utilize it for the limits of what is possible.
[00:35:33.418] Kent Bye: So at the latest developer conference there at NVIDIA, you made some announcements that there was some new really high-level GPUs that were very specific to artificial intelligent training of neural networks. So maybe you could talk about these super high-end GPUs that were just announced.
[00:35:50.725] Ned Finkle: Um, yeah. And again, cause we haven't announced them. I can't, I can't announce them, but we are building things that are targeted at specific markets and we'll continue to unfold those and at specific times make announcements. But basically our GPU architecture is very flexible and we can target it at certain markets that are necessary. So if we see a market occur that we need to adjust the nature of the architecture, we'll do that. And you'll continue to see those products come out.
[00:36:19.122] Kent Bye: So maybe you could talk a bit about the role of GPUs and why are they being used to train machine learning, artificial intelligence, neural networks?
[00:36:28.564] Ned Finkle: Yeah. Well, a number of reasons. One is just the horsepower of the machine. And they're massively parallel architectures, which lends itself to the problems that AI and machine learning need to do. They need massively parallel architectures. So over time, it's just become the favored platform. We also took the platform and we built an architecture and a whole platform. So the coders can actually code to it. And we created a whole language so we could get the most performance out of that architecture. And we're migrating towards also making it available on all the different software programming platforms people use. We just targeted that market and made sure that we were there, and we have the highest performance architecture by far to utilize for that. And GPUs, because they were used in PC games for many years, we actually build the largest architectures in the marketplace. Our last chip that we just produced now is 15 billion transistors. We are actually building the largest, densest architectures in the industry. And they just happen to be the best by far for using for AI.
[00:37:32.875] Kent Bye: And so, when you say massively parallel processing, how many different individual threads can be happening at the same time?
[00:37:38.417] Ned Finkle: Well, you have thousands of cores, so you could have thousands of individual threads operating. You know, an Intel architecture in comparison is kind of a 4, 8, 16, 32. You know, they're powerful processors, but they're not thousands of powerful processors. So, because of the nature of the architecture, it just lends itself perfectly for AI. And for many years, we've actually been wanting to be the leader in this area, so it wasn't by accident. We spent a lot of time over the years building up courses at the universities. We have over 800 universities we train and have courses at, and building up the whole platform so that you can download it. And we actually put the nature of the architecture in all of our chips so that you could use a desktop PC that's used for PC gaming. Well, when you get done PC gaming, you could also do what is the beginnings of artificial intelligence or deep learning. Your PC gaming machine also doubles at night as a workstation for AI.
[00:38:33.117] Kent Bye: Now, one of the things that I was told by someone who works in the AI industry in Silicon Valley, he said that basically Facebook and Google, there's kind of like this open sourcing of a lot of the cutting edge AI algorithms. And part of the reason is because you need these huge GPU cloud-based architectures to be able to even train up these AIs and a lot of time of testing and getting it wrong until you actually get it right. They may have these algorithms out there, but you need the hardware in order to actually train them up.
[00:39:06.929] Ned Finkle: I would agree with that, but you know, you're, you're getting into probably some areas where you'd want a really, a really heavy PhD technical guy to talk to you about, but that's true. You need the horsepower to work on the big algorithms and some of the big algorithms need a lot of data to run by them. And a desktop may not be enough. You'd need a cloud data center or a much larger configuration of GPUs to do that. So that's true.
[00:39:30.307] Kent Bye: And so what are some of the customers of NVIDIA with these specific applications of AI? How are they using them?
[00:39:36.630] Ned Finkle: Well, you know, the simple case, the obvious one is the autonomous vehicles. We're pretty much in all autonomous vehicle development right now. All companies that are doing serious AV development are using our platform one way or another. Google's admitted it. Baidu just had an announcement the other day. Volvo is actually taking cars to market here pretty soon in Sweden, the UK, and China. So, AV is one area. Drones, and you can look online for some of those announcements. And then, Google, Microsoft, many of the big data centers and the research organizations are using us. Facebook, they're all announced. pretty much all the big players are using GPUs at one level or another. I mean, it's a primary use product.
[00:40:23.760] Kent Bye: So yeah, I was just at Google I/O and the CEO of Google said that we're in the midst of just the beginning of this AI revolution that's happening. So from your perspective, what has really changed over the last year or two that's really created this exponential growth within artificial intelligence?
[00:40:40.180] Ned Finkle: You know, and I would agree with that statement. And I think what's happened is that the algorithms have been out there, but the processing horsepower of the machines have not been sufficient to bring them to life. So it's only been in the past few years that machines now are faster at recognizing images. Machines have been able to do the speech translation, you know, language to different languages. things like that that really we had struggled with in the past. So I think the computing horsepower, and you could say aided by Moore's Law, and then also the ability of the platform to be utilized by programmers, which NVIDIA focused very hard on building a total platform solution so programmers could use it, those things have changed. So the horsepower's there, and the programmers have the ability to tap into it.
[00:41:26.171] Kent Bye: It seems like the amount of big data to be able to train these neural networks also, I would imagine, would be a big part of it as well.
[00:41:32.431] Ned Finkle: Yeah, and there's a lot of work in creating bigger and bigger data sets that we can get access to. And I know at the NIH and places like that, they're trying to rationalize the databases so we can actually do more AI and deep learning work on the data sets. And then fields like autonomous vehicles, we're either producing the data set by driving around and or a company like NVIDIA can simulate the problems of driving. And so, yes, it's true.
[00:42:00.525] Kent Bye: So are we going to have cars with GPUs in them, or is the GPUs mostly to train the neural nets, and then once you have the training done, then you just kind of apply it to this computer vision algorithm and you don't need the GPU necessarily on board of the car?
[00:42:15.715] Ned Finkle: Well, I guess the easy answer is absolutely you're going to have GPUs in the car because you can train it on the same product that you actually deploy it. The Drive PX product platform that we showed off recently, that's a 24 teraflop development platform. but you can go to market with the 8 teraflop portion of the platform is ready to go to market. Volvo will be doing that, that actually is a production platform, so they'll be doing both their machine learning on it, deep learning, and then they'll go to market with actually the portion of the platform that's production ready. And we'll see a mix. I think some manufacturers will want to do their own versions. But the fact that the training can then be deployed on the GPUs one-to-one, that's a very efficient scenario. And when you look at the cost of doing it right, I think the GPUs will be a very good deal for the customers.
[00:43:08.541] Kent Bye: So what were some of your biggest takeaways today from the AI and Public Policy Summit that we just had here at the University of Washington?
[00:43:16.034] Ned Finkle: Well, I have to admit, I'm still processing everything that I heard. I think, you know, I'm aligned with the professors from the University of Washington and some of the other speakers in terms of their interpretation of the nature of what we're going into. I think it's an area where we have so much opportunity to make society better. We will have areas of employment and then dislocation of employment. And I think we have to be considerate and thoughtful and make sure that policy moves our education and other things we're doing, thinking about this ahead of time. I don't think it's going to happen overnight. I think this is going to happen maybe quicker to some people, but there's going to be a lot of opportunities that open up that we can see right now. That's what we should be talking about, making sure we train and make available those kind of jobs and education for that. And we can do that now. really applaud the White House for taking initiative to start the dialogue to get these discussions going and get people really focused on what can be done proactively to embrace the technology as opposed to just reacting to it.
[00:44:20.261] Kent Bye: And finally, what do you see as kind of the ultimate potential of artificial intelligence and what it might be able to enable?
[00:44:27.686] Ned Finkle: You know, I have things that excite me a lot, so maybe I'll put it in that category. I really get excited about the advances in medical technology and things that we can do in that area to make medicine cheaper and safer and all manner of things that combinations and permutations of learnings we have that researchers can take advantage of and find breakthroughs that we've been waiting for for decades. I think we're going to advance and areas of cancer, Alzheimer's, so many areas that are tough problems for our society. I think AI holds promise to do that. I think mobility for seniors, young people, and actually changing the paradigm of how our cities and traffic flows work. I think AV is going to help us a lot there. So I think it holds a lot of promise. I'm optimistic that the good greatly outweighs the challenges of this technology.
[00:45:16.157] Kent Bye: Awesome. Well, thank you so much. All right. Thank you. And so that was Ned Finkel, he is the Vice President of External Affairs at NVIDIA. So, there's a lot of content here, and just to unpack it and to add a few other quick takeaways is that, you know, artificial intelligence is going to have a huge impact on our society and there is a request for information that's available for people to start to give their feedback in terms of where this is all going. You know, I saw a different range of people who were extremely optimistic in terms of just saying, hey, don't worry about the Skynet. It's no big deal. You know, it's just like, that's all just Hollywood hype, that these fears are unfounded. And then talking to someone like Sheila who says, no, actually, Skynet is real. There's already actually programs like this that are out there. So another person that I didn't get a chance to talk to was Kate Crawford, and I really enjoyed a lot of her thoughts in terms of the transparency and privacy and just a lot of these implications about artificial intelligence. And I just really enjoyed a lot of her thoughts and what she had to say. If you wanted to kind of go back and look at some of these raw streams of the discussions that were happening here at this Artificial Intelligence Law and Policy Summit, then there is a live stream where all this will be archived, and you'll be able to check out all the different discussions that were happening. In terms of virtual reality, I think that moving forward there's going to be a lot of different applications with artificial intelligence, chatbots, being able to interface within a virtual reality environment. You know, one of the interviews that I did with Philip Rosedale at the SVVR 2016, which I haven't published yet, but one of the things that he said that I thought was really interesting is that Virtual reality may actually be this neutral meeting ground between humans and robots because the robots can't hurt the humans in virtual reality as well as the humans can't hurt the robots. So it'll be like this neutral meeting ground for AI and humanity within VR. So hopefully you enjoyed hearing some of the thoughts and discussion from this Artificial Intelligence Law and Policy public workshop that happened at University of Washington from both some of the participants as well as some of the attendees. So if you enjoy the Voices of VR podcast, then please spread the word. Let your friends know about it. And you can follow me at Kent Bye on Twitter or at Kent Bye on Snapchat, where I'm starting to do a lot more daily video blogging. And if you'd like to donate to the Voices of VR podcast, then please consider becoming a contributor at patreon.com slash Voices of VR.