You don’t need to understand AI to enjoy its benefits

Dr. Fei-Fei Li emigrated from China to the United States when she was 15 years old. She balanced her education with working odd jobs to earn extra money, and at one point moved her parents into her dorm room as a graduate student at Caltech. Today, she is known as the godmother of artificial intelligence, one of a handful of researchers who laid the groundwork for the AI revolution. She’s a professor of computer science at Stanford as well as the director of Stanford’s Human-Centered AI Institute.

In her new book, The Worlds I See (Flatiron Books), Dr. Li weaves together her personal history of immigrating to the United States and finding her footing in the world of science, the evolution of her research, and a clear and accessible history of AI. The manuscript is luminous, elevated by her passion for science, her bone-deep humanity, and a strong conviction in the beauty of the world.

Fast Company chatted with Dr. Li about educating the public on AI, having better conversations about it, and crafting better regulation. Throughout our conversation, she focused on how AI can be used to empower humanity. She’s passionate about ensuring that human agency always remains valued and says that AI should not be feared or even revered, but simply understood as a tool that can serve human interests.

Isaac Bashevis Singer wrote: “The more technology, the more people will be interested in what the human mind can produce without the help of electronics.” What value do humans bring to the table?

Great question. I want to begin with human agency. Humans are very complex beings. We’re not just defined by one side of our brain or by the way we compute on big data. We’re not defined by our memory load or whatever algorithm is in our own neural networks. We are defined by our will, by our emotion, by our agency, by our relationships with ourselves and with each other. If there’s one thing I find myself busy communicating as an AI technologist these days, it’s that we need to have confidence in and respect for ourselves, because we are not the same as a computing tool.


Why are these human abilities important?

It’s important to have a measured view of the tools. They are very powerful, but I want to underscore that they are tools. So this is maybe a nerdy way of thinking about it, but I use the American Time Use Survey in my research. It keeps track of what Americans are doing with their time—work, entertainment, leisure, chores. There are thousands of tasks. I’m not trying to downplay the technology, but it’s very limited in what it can do compared to humans. I think a very important part of being human is figuring out our relationship to the tools we’ve created. That’s an important task human civilization has always faced. Sometimes, we do a good job. Sometimes, we don’t. We need to recognize that relationship and have the agency to determine how this relationship should go.

Companies worry that guardrails on AI will slow down innovation. What are your thoughts on how to balance the speed of innovation versus safety?

I wanted to say it’s a million-dollar question, but I think it’s a trillion-dollar question. It is very, very important we figure that out. And it’s going to be an ongoing iterative process. I don’t think there’s a one-shot simple answer, and I frankly feel anyone who is out there saying [they] have this one solution captured in one or two sentences is not facing the reality. We need both. The innovation will bring discovery, will bring jobs, will bring better productivity, better health, better education, a better environment. That’s absolutely a foreseeable future. But in the meantime, we also need guardrails that protect lives, human dignity, especially for people of underserved communities, and the values we care about as a species. As a technologist, as an educator, I would be concerned if any advocacy voice was leaning to one extreme or the other.

How do we design good guardrails?

That’s something I’ve been grappling with over the past five years. That very question prompted me to establish the Human-Centered AI Institute at Stanford, which means putting the well-being of individuals and society at the center of designing, developing, and deploying this technology. Designing and implementing good guardrails is so complex. I think it will take a framework that has a balanced view of the technology and the guardrails, puts ethics into the design of the technology, and takes a multistakeholder approach that considers individual, community, and societal impact.

What are the biggest gaps between the general public’s understanding of AI and that of experts?

Frankly, there are a lot of gaps. This technology is so new that the education level isn’t that high, which makes sense. How long did [it take for the public to understand] electricity, for example? We have to give it time to educate the public. Right now the public has been misled or is focused on the wrong issues. That’s not through anyone’s ill intentions, but through a lack of education.

There’s also a big gap in the voices we’re hearing. There are so many incredible researchers, entrepreneurs, technologists, educators, and policymakers focused on creating better, say, medicine or agriculture with AI, but we’re not hearing from them. It’s a very small, winner-takes-all group that gets the megaphone. That’s a disservice to the public.

What are some of the biggest misconceptions you’d like to set straight?

For example, it would be very interesting to have more nuanced communication about what these large language models are actually helping with. There is a jump between “large language models exist” and “all human agency is gone and no one needs to study English anymore.” It would be really interesting to look at how businesses are using large language models. I actually doubt it’s a situation where you turn on a large language model, leave the room, let it run its course, and the product or whatever you’re making is done. Large language models are like a glorified calculator. Okay, maybe not the best analogy, but the point is, they are tools, and they can be used to supercharge productivity.

But we also need honesty about: What is the impact on jobs and wages? How are we using these machines responsibly? Yet we’re not having that conversation; there’s just clickbait. So if you sample an average American on Main Street and ask them: Where do you read about AI? What have you really learned? From whom? What’s your impression of AI? I think the answers will be fairly thin.

What does a robust education in AI look like?

I actually tend to say I don’t even know how Tylenol works, yet I trust it and use it. So education is nuanced. This technology is so new, everybody thinks unless they understand the math, they don’t understand AI. That’s actually not true. We don’t have to know biochemistry to have a common-sense level of understanding of Tylenol. There’s public education about what medicines are for and what symptoms they treat, as well as what the FDA does, and how I work with my pharmacists and doctors to take Tylenol in a responsible way. The existing public education gives agency to people; it gives them a level of understanding. Then, if you really love biochemistry, go, by all means, and study the pathways of Tylenol’s molecular effects. 

Right now, all of this is missing from AI education. There is a lot of public material now on the technical side, but it’s not very accessible. In a way, that’s the reason I wrote the book—I wanted to discuss AI in an accessible way. We need education and communication with the public to understand how we look at AI through other lenses, like economics or lawmaking.

What consequences are we looking at if we don’t educate people properly?

My greatest fear is that we lose agency. There’s so much fear that we’ll end up in a machine-overlords scenario. That’s overhyped. But if public education is inadequate, that could become a self-fulfilling prophecy. All of us have agency. Policymaking is really important here. It could swing from banning the whole thing completely to letting it go wild and organic, which would be even worse. For the sake of human dignity, it’s important to make sure the guardrails are balanced, so we can use AI to deliver benefits. I wrote my book in hopes that high school students and college students will be inspired by AI and the opportunities it presents.

Whenever I tell people at parties that I write, they tell me I’m going to lose my job to ChatGPT. What should I say to them?

You’re underestimating your own agency! Do an experiment yourself, okay? Take a piece you have to write, and use ChatGPT to write it. Then redline it and see how much you have to rewrite it. Visually show that to people. Look, this is 95% changed. I mean, maybe there are a few things you leave because ChatGPT actually does a good job—but check it out yourself. You’ll find you have a lot of room for agency. You can use AI to lift your work and lift your own agency and not feel threatened.

Source: Shalene Gupta, www.fastcompany.com, 2024-01-04 06:01:00