Senior Congressional Staff Visit Stanford for the Cyber and Artificial Intelligence Boot Camp
Congressional staffers with Herbert Lin, Andrew Grotto, Reid Hoffman, John Etchemendy and Michael McFaul before the keynote dinner on Tuesday, August 26
The Cyber and Artificial Intelligence Boot Camp marked the first major event since the launch of the new Stanford Cyber Policy Center at the Freeman Spogli Institute for International Studies. “After three earlier boot camps, CBC 2019 was the best boot camp ever,” said Herb Lin, who led the program alongside Andrew Grotto, Director of the Program on Geopolitics, Technology, and Governance at the Cyber Policy Center.
Capping the first day of sessions, congressional staff participated in a simulated ransomware attack on a fictional hospital. The staffers, playing various departmental roles, were tasked with making a critical decision: pay the ransom or not? Armed with relevant background information and with direction from simulation writers and volunteers from Stanford Health Care, participants had to figure out how to protect patient data, provide surgical care and avoid a major PR disaster. “Of particular interest was their reaction to being in a situation where they had to answer urgent questions from the simulated hospital board of directors, rather than being the ones asking those questions. Hopefully, greater empathy was one result of the exercise!” said Lin.
The intense ransomware simulation was just one of the challenging issues congressional staff explored over three days of sessions with Stanford experts, Silicon Valley entrepreneurs and policy veterans. “As a policy person, I was 100% against it [paying the ransom], but in the hospital-business role, I had to be for it. Pulling off the policy hat was the biggest takeaway for me,” said a staff member who participated as part of the hospital’s IT Operations & Engineering Team.
“Data is the New Seawater”
In sessions on cyber security threats, Kevin Mandia, CEO of FireEye, laid out the threat landscape, current trends and capabilities, and future outlook of malicious cyber activity, providing a readout of an internal FireEye report detailing the threats of the last 30 days. He provided insight into how a cyber actor’s tactics, techniques, and procedures (TTPs) are catalogued and categorized for monitoring and analysis.
Panelists spoke of the challenges created by the increasingly sophisticated methods employed by bad actors. Sean Kanuck, visiting fellow at the Hoover Institution and former U.S. National Intelligence Officer for Cyber Issues, explained that while there’s lots of data available for use in creating solutions to combat bad actors, the sheer volume is a challenge.
“Data is the new seawater. Though there’s plenty of it, it does you no good for hydrating your body until you do all the work getting it ready to drink...whoever can cleanse it and use it effectively wins the war,” Kanuck said. Data was also discussed in the context of insurance, privacy, regulation and information operations.
The ‘Hands on Hack Lab’ led by Alex Stamos, Director of the Internet Observatory at the Cyber Policy Center, gave attendees a chance to become amateur hackers. Given a relatively simple set of tools, staffers attempted to access webcams and router information. His point was quickly made: hacking is easier than you think. He also highlighted how little was known about security when the internet was in its infancy, contrasting that era with the high volume of nation-state cyber activity happening today.
Keynote Dinner Explores Perspectives on Governance of Artificial Intelligence
For many speakers focused on the future of AI, the importance of a human-centric approach was top of mind. Speaking at the keynote dinner with 150 guests in attendance, John Etchemendy, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), said what he worried about most was AI creating more inequality. He underscored the importance of studying and addressing the human impact. As AI proliferates, Etchemendy told the audience, there's a risk “the financial benefits to the human labor force go down, leaving more money in the hands of the machine owners.”
Michael McFaul, Director of the Freeman Spogli Institute and moderator of the keynote, asked about the friction between the speed of AI technology in Silicon Valley and the constraints of government regulation. Reid Hoffman, Partner at Greylock Partners and co-founder of LinkedIn, encouraged the audience not to let fear get in the way of further exploration. “If you were to say, ‘we’re not going to accept the development of this electricity thing, unless you can prove it’s never going to be used for anything bad’ we’d have difficulty having dinner tonight. You can’t prove zero negative application. What you have to do is realize that it’s massively beneficial on average.”
Clearing a Path for Discovery
In a session on AI and machine learning, assistant professor of computer science Dr. Emma Brunskill highlighted both the potential benefits of AI and the challenges: machine learning can encode biases or be unfair to certain groups, algorithms can amplify bias and data sets can be unbalanced. “AI is only as good as the people doing the work,” Brunskill argued.
Dr. Fei-Fei Li, Co-Director of HAI, stressed the potential benefit of AI in public health. A simple network of sensors in hospitals, for example, can be used to track which hospital staff aren’t meeting hygiene standards. With that information, said Li, “hospital staff can receive real time reminders to wash their hands,” potentially reducing the number of hospital-acquired infections, which, she pointed out, kill more than twice as many people in the U.S. as car accidents. Li urged congressional staffers to help clear a path for researchers and entrepreneurs to discover new uses for these technologies. “We are witnessing the beginning of electricity or the mass production of automobiles,” she said. The only way to harness the power and potential of AI, she argued, is through smart strategy, policy and infrastructure. “We can’t delay.”
Before heading back to Washington, D.C., the staff participated in a briefing and facility tour at the Center for Automotive Research at Stanford (CARS), led by Executive Director Stephen Zoepf, where they saw vehicles used for testing autonomous systems and a driving simulator used to study human driving patterns.
Congressional staff member in CARS driving simulator, Wednesday, August 28
The boot camp was co-sponsored by the Hoover Institution, the Cyber Policy Center at the Freeman Spogli Institute for International Studies, and the Stanford Institute for Human-Centered Artificial Intelligence. “The boot camp is one of the signature elements of Stanford’s leadership at the intersection of technology, governance and cybersecurity,” Grotto said.