Tag Archives: Intelligence
More than a third of AI experts surveyed by Pew Research said they are concerned that artificial intelligence will leave most people worse off in 2030 than they are today, while the majority are optimistic that its benefits will make life better.
Pew surveyed 979 “technology pioneers, innovators, developers, business and policy leaders, researchers and activists,” asking whether they thought that AI advances would leave most people better off by the year 2030. Will it “enhance human capacities and empower them?” Pew asked. Or will it “lessen human autonomy and agency,” leaving them worse off?
Overall, 63% said they were hopeful that people will be better off by 2030, with 37% believing they will not be better off. “Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human,” Pew wrote in its survey findings released this week.
“2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable,” Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale, said in response to Pew’s question.
Many of those surveyed said the good or bad effects of AI applications will depend on how they are built and deployed. Most of the anticipated benefits of AI center around making people more effective in their work and improving the ability of medical professionals to diagnose and treat diseases.
Among the optimists is Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, who told Pew, “I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet.”
Brynjolfsson also said that humans would need to work to guard against the negative potential of artificial intelligence. “AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons,” he said. “We need to work aggressively to make sure technology matches our values.”
And while AI is expected to create some new jobs and make other jobs more productive, some respondents said that it could also lead to widespread job losses, along with the loss of the sense of meaning that comes with work.
“The answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI,” said Bryan Johnson, founder and CEO of Kernel, a developer of neural interfaces. “I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
Some also saw a potential risk to human liberty if AI expertise widens a gap between the powerful and the powerless.
“AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals,” said Greg Shannon, a chief scientist at Carnegie Mellon’s CERT Division. “Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear.”
A separate report from Diffbot this month estimated that 720,000 people around the world are skilled at machine learning, roughly 0.01 percent of the world’s population.
Until recently, Korean company Samsung was said to be behind its competitors in researching and developing artificial intelligence (AI) technology, but the company’s recent strategy suggests that it’s committed to closing the gap and even competing for the top spot. Samsung is the world’s leading provider of data storage products, with 70 percent of the world’s data produced and stored on its devices. By revenue, it is the largest consumer electronics company in the world (yes, it has even overtaken Apple) and sells 500 million connected devices a year. From industry events to AI-centered goals to products updated to use artificial intelligence, Samsung seems to have gone full throttle in preparing for the fourth industrial revolution.
Bringing innovators together
Samsung started 2018 with the intention of becoming an artificial intelligence leader, organizing an Artificial Intelligence (AI) Summit that brought together 300 university students, technical experts and leading academics to explore ways to accelerate AI research and develop the best commercial applications of AI.
Samsung has Dr. Larry Heck, a world-renowned AI and voice recognition leader, on its AI research team. At the summit, Dr. Heck emphasized the need for collaboration within the AI industry to build consumer confidence and adoption and to allow AI to flourish. Samsung also announced plans to host more AI-related events, as well as the creation of a new AI Research Center dedicated to AI research and development, which will bolster Samsung’s expertise in artificial intelligence.
Bixby: Samsung’s AI Assistant
Bixby, Samsung’s artificial intelligence system designed to make device interaction easier, debuted with the Samsung Galaxy S8. The latest version, 2.0, is a “fundamental leap forward for digital assistants.” Bixby 2.0 allows the AI system to be available on all devices including TVs, refrigerators, washers, smartphones and other connected devices. It’s also open to developers so that it will be more likely to integrate with other products and services.
Bixby is contextually aware and understands natural language to help users interact with increasingly complex devices. Samsung plans to introduce a Bixby speaker to compete with Google Home and Amazon Alexa.
Absurdly Driven looks at the world of business with a skeptical eye and a firmly rooted tongue in cheek.
Every time a tech company does something patently ignorant or offensive, it’s rarely worth asking the question: “What were they thinking?”
Almost always, the answer is: “They weren’t.”
And they certainly weren’t feeling.
An example is an ad released by Snapchat last week for its “Would You Rather!” game. It asked whether you’d rather “Slap Rihanna” or “Punch Chris Brown.”
In 2009, Brown and Rihanna were involved in a much-publicized incident of domestic violence. Brown was charged with battery.
And this is something to “joke” about?
Please, take a look.
Rihanna responding to Snapchat’s ad. I can’t believe they did this. pic.twitter.com/TpHQIXTm4j
— Gennette Cordova (@GNCordova) March 15, 2018
Oh, Snapchat finally took the ad down and offered some sort of apology.
“The advert was reviewed and approved in error, as it violates our advertising guidelines. We immediately removed the ad last weekend, once we became aware. We are sorry that this happened,” the company said.
You might have thought that it had somehow slipped out without anyone noticing.
Yet this statement suggests that actual human beings examined it and decided it was appropriate for publication.
For her part, Rihanna has now offered a response — remarkably measured, in the circumstances.
She said: “I’d love to call it ignorance, but I know you ain’t that dumb! You spent money to animate something that would bring shame to DV victims and made a joke of it!!!”
This is surely the point. You can’t blame a rogue algorithm here. You can’t blame a malevolent piece of code.
Someone designed this execrable item. Someone animated it and then someone looked at it and approved it.
And no one stopped to think: “This is so thoroughly vile and tasteless that we should all be ashamed of ourselves?”
Shouldn’t all those someones face consequences?
“All the women, children and men that have been victims of DV in the past and especially the ones who haven’t made it out yet…you let us down!,” continued the singer. “Shame on you. Throw the whole app-oligy away.”
Snapchat tried again with, yes, an apology.
A company spokeswoman told me: “This advertisement is disgusting and never should have appeared on our service. We are so sorry we made the terrible mistake of allowing it through our review process. We are investigating how that happened so that we can make sure it never happens again.”
But it will happen again. And again.
Tech companies rely so much on machines that many of their employees think exactly like those machines.
To reinforce the fatal loop, the people who create the code and algorithms behind the machines tend to think like machines, too.
So when decisions are made, any actual human emotions are cast aside. Or never even engaged.
Worse, too many have grown up — sort of — with the belief that you move fast, break things and apologize later.
Well, your PR people pen your apology, while you’re too busy coding.
Apologizing is easy.
Facebook CEO Mark Zuckerberg, for example, has made an art form out of it.
For example, after he and a colleague performed a VR promotion while staring blankly at the suffering homeless of Puerto Rico and high-fiving.
Will this complete blindness when it comes to understanding, appreciating and, frankly, even feeling human emotions ever change?
IPsoft is, in many ways, an unusual entrant into the crowded but burgeoning artificial intelligence industry. First of all, it is not a startup but a 20-year-old company, and its leader isn’t some millennial savant but a fashionable former NYU professor named Chetan Dube. It bills its cognitive agent, Amelia, as the “world’s most human AI.”
It got its start building and selling autonomic IT solutions and its years of experience providing business solutions give it a leg up on many of its competitors. It can offer not only technological solutions, but the insights it has gained helping businesses to streamline operations with automation.
Ever since IBM’s Watson defeated human champions on the game show Jeopardy!, the initial excitement has led to inflated expectations and often given way to disappointment. So I recently met with a number of top executives at IPsoft to get a better understanding of how leaders can successfully implement AI solutions. Here are four things you should keep in mind:
1. Match The Technology With The Problem You Need To Solve
AI is not a single technology but encompasses a variety of different methods. In The Master Algorithm, veteran AI researcher Pedro Domingos explains that there are five basic approaches to machine learning, from neural nets that mimic the brain, to support vector machines that classify different types of information, to graphical models that take a more statistical approach.
“The first question to ask is what problem you are trying to solve,” Chetan Dube, CEO of IPsoft, told me. “Is it analytical, process automation, data retrieval or serving customers? Choosing the right technology is supremely important.” For example, with Watson, IBM has focused on highly analytical tasks, like helping doctors diagnose a rare case of cancer.
With Amelia, IPsoft has chosen to target customer service, which is extraordinarily difficult. Humans tend not to think linearly. They might call about a lost credit card and then immediately realize that they wanted to ask about paperless billing or how to close an account. Sometimes the shift can happen mid-sentence, which can be maddening even for trained professionals.
So IPsoft relies on a method called spreading activation, which helps Amelia to engage or disengage different parts of the system. For example, when a bank customer asks how much money she has in her account, it is a simple data retrieval task. However, if a customer asks how she can earn more interest on her savings, logical and analytical functions come into play.
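Spreading activation can be pictured as propagation through a weighted graph of concepts, with only the strongly activated modules engaging. The toy graph, weights and threshold below are invented for illustration and are not IPsoft’s actual implementation:

```python
# A minimal sketch of spreading activation over a toy concept graph.
# Graph, weights, decay and threshold are illustrative assumptions only.

def spread_activation(graph, start, initial=1.0, decay=0.5, threshold=0.2):
    """Propagate activation from `start` through weighted links,
    attenuating by `decay` at each hop; return the nodes whose
    activation exceeds `threshold` (the 'engaged' parts of the system)."""
    activation = {start: initial}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph.get(node, []):
            new_level = activation[node] * weight * decay
            if new_level > activation.get(neighbor, 0.0):
                activation[neighbor] = new_level
                frontier.append(neighbor)
    return {n: a for n, a in activation.items() if a >= threshold}

# Hypothetical banking-domain graph: intents link to the modules they need.
graph = {
    "ask_balance": [("data_retrieval", 0.9)],
    "earn_more_interest": [("analytics", 0.9), ("product_catalog", 0.8)],
    "product_catalog": [("data_retrieval", 0.7)],
}

engaged = spread_activation(graph, "earn_more_interest")
```

With these numbers, a balance query activates only data retrieval, while the savings-interest query pulls in the analytics and product-catalog modules, mirroring the distinction described above.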
2. Train Your AI As You Would A New Employee
Most people by now have become used to consumer-facing cognitive agents like Google voice search or Apple’s Siri. These work well for some tasks, such as locating the address for your next meeting or telling you how many points the Eagles beat the Vikings by in the 2018 NFC Championship (exactly 31, if you’re interested).
However, for enterprise level applications, simple data retrieval will not suffice, because systems need domain specific knowledge, which often has to be related to other information. For example, if a customer asks which credit card is right for her, that requires not only deep understanding of what’s offered, but also some knowledge about the customer’s spending habits, average balance and so on.
One of the problems that many companies run into with cognitive applications is that they expect them to work much like installing an email system — you just plug it in and it works. But you would never do that with a human agent. You would expect them to need training, to make mistakes and to learn as they gained experience.
“Train your algorithms as you would your employees,” says Ergun Ekici, a Principal and Vice President at IPsoft. “Don’t try to get AI to do things your organization doesn’t understand. You have to be able to teach and evaluate performance. Start with the employee manual and ask the system questions.” From there you can see what it is doing well, what it’s doing poorly, and adapt your training strategy accordingly.
3. Apply Intelligent Governance
No one calls a customer service line and asks a human to talk to a machine. However, we often prefer to use automated systems for convenience. For example, when most people go to their local bank branch, they just use the ATM outside without giving a thought to the fact that there are real humans inside ready to give them personalized service.
Nevertheless, there are far more bank tellers today than there were before ATMs, ironically because each branch needs far fewer of them. Because ATMs drastically reduced the cost of opening and running branches, banks began opening more of them, and still needed tellers for higher-level tasks, like opening accounts, giving advice and solving problems.
Yet because cognitive agents tend to be so much cheaper than human ones, many firms do everything they can to discourage a customer talking to a human. To stretch the bank teller analogy a little further, that’s almost like walking into a branch with a problem and being told to go back outside and wrestle with the ATM some more. Customers find it incredibly frustrating.
So IPsoft stresses to its enterprise customers that it’s essential that humans stay involved with the process and make it easy to disengage Amelia when a customer should be rerouted to a human agent. It also uses sentiment analysis to track how the system is doing. Once it becomes clear that the customer’s mood is deteriorating, a real person can step in.
Training a cognitive agent for enterprise applications is far different than, say, Google training an algorithm to play Go. When Google’s AI makes a mistake, it only loses a game, but when an enterprise application screws up, you can lose a customer.
4. Prepare Your Culture For AI As You Would For Any Major Shift
There are certain things robots will never do. They will never strike out in a little league game. They will never have their heart broken or get married and raise a family. That means that they will never be able to relate to humans as humans do. So you can’t simply inject AI into your organizational culture and expect a successful integration.
“Integration with organizational culture as well as appetite for change and mindset are major factors in how successful an AI program will be. The drive has to come from the top and permeate through the ranks,” says Edwin Van Bommel, Chief Cognitive Officer at IPsoft.
In many ways, the shift to cognitive is much like a merger or acquisition — which are notoriously prone to failure. What may look good on paper rarely pans out when humans get involved, because we have all sorts of biases and preferences that don’t fit into neat little strategic boxes.
The one constant in the history of technology is that the future is always more human. So if you expect cognitive applications simply to reduce labor, you will likely be disappointed. However, if you want to leverage and empower the capabilities of your organization, then the cognitive future may be very bright for you.
“You will get all you want in life if you help enough other people get what they want.”
I’ve followed that nugget of wisdom for almost as long as I’ve been working, and I credit a lot of its impact to the eponymous founder of the Trammell Crow Company, the real estate development firm where I worked early in my career.
I recall back in 1974, shortly after I joined the company, Fortune magazine profiled our founder in an article titled, “Trammell Crow Succeeds Because You Want Him To,” which captured the stories of many people who cheered Crow on to success after success.
It was well known in the industry even then, that Trammell always looked to acknowledge those with whom he worked. I remember one time I commented on a watch he was wearing. The next week, when I returned to my office, I found a new watch identical to it in a box on my desk. No note was attached, but I knew who sent the surprise gift.
I wasn’t alone. Over my 20-plus years at the company, I met financiers and bankers who told me that they were the ones who gave Trammell, a pioneer in commercial real estate, his start with his first loan. These lenders all felt that they were part of his success because he shared it with people and credited them.
This attitude extended to hiring as well. Trammell prioritized human interactions, people skills, situational awareness and emotional intelligence (EQ) in everyone he worked with. Smarts and passion were a given in new hires, but they also had to be the kind of person with whom you’d want to have a beer. They couldn’t be selfish, short-term thinkers, or make their own goals the priority.
The trick, he said, was to find those who have high EQ and want others as well as themselves to win.
I’ve spent a lot of my career looking for these high EQ people for leadership positions and found that they often have the following five traits:
- They’re team players. They don’t have personal agendas that overwhelm others’ needs. They listen. They don’t view the world only through their own lenses, nor are they overly combative. Unfortunately, even though star solo performers can often do great work on their own, they can also quickly dismantle a team if you’re not careful.
- They’re secure and confident. Brashness usually comes from insecurity. The quiet ones, those who are reserved, stable people, are most often unafraid even under stress. They’re the kinds of leaders people will be willing to follow anywhere.
- They’re visionary. They take the long view. They can “see around corners,” and they anticipate the long-run, second-, and third-order consequences of every action. They also understand the all-things-considered wisdom of reviewing all of the options before them in any given situation.
- They’re nice. They’re kind and thoughtful on a personal level. This behavior is something I saw firsthand in Trammell, but it is something that I truly learned from my mother, who always said that it costs nothing to say a kind word and to lift others’ spirits.
- They’re selfless. They don’t keep score. They do things without any expectation of reward. Those who make helping others succeed a priority often find that it is repaid. They also know not to change course when the favor isn’t returned. If you can do that, you’ll find legions of fans, friends and teammates who will quietly root for your success.
Conceptually, all of these people-pleasing principles are almost circular in their logic. You’ll want everyone to like you, so that they’ll want you to succeed.
But it’s not that simple.
It’s really about respect. If a tension exists between being respected and being loved, my suggestion is always to choose respect. Eventually, you’ll be loved if you’re respected. But if you’re only loved, respect may not follow.
Remember, too, life is long; it is not a sprint, but is instead an ultra-marathon. You’ll run into the same people over and over, again and again. Make sure they have great memories of you, as a colleague or leader.
They’ll ultimately remember that you were gracious, that you helped them out when they were under pressure, and that you offered a good word on their behalf.
Absurdly Driven looks at the world of business with a skeptical eye and a firmly rooted tongue in cheek.
I’m writing this in a hotel room.
But that’s only because I’ve stayed at this hotel for more than 20 years and I’m a sentimental fool.
Oh, and I once had an indifferent Airbnb experience here in Miami. And the price difference between my hotel and the local Airbnbs is negligible.
Usually when I’m booking a trip, I look at Airbnb first these days.
I’ve now had many good experiences, both in the U.S. and abroad.
Indeed, I’ve had truly wonderful Airbnb hosts — and apartments — in Oslo and Lisbon, especially — that made me never want to stay in a hotel again.
This doesn’t worry Marriott CEO Arne Sorenson.
I know this because he told the New York Times’s delightful Ron Lieber that he’s never even tried Airbnb.
In my naïveté, I’d always thought that one of the principal jobs of a CEO was not just to know your competition, but to get your hands on its product.
Yet Sorenson explained that “his daughter has [tried Airbnb]. She told him he had nothing to worry about.”
What a relief.
Whenever I wonder about a product, all I do is ask one of my Starbucks baristas. If they don’t like it, I know it’s no good.
Sorenson, though, expanded his views on Airbnb: “They were the toughest competition when they were offering a true sharing-economy product. The more they get to offering dedicated units, which they’ve done as they’ve grown, the more they look like the competition we’ve faced for decades.”
However confident you might feel, the reek of complacency can incite the flames of arrogance.
It’s true that Airbnb has strategic challenges.
It may be that ultimately it becomes something different from its current persona.
What it surely does have, though, is millions of users who have been only too happy to get away from the disturbingly inflated pricing and the nauseating nickel-and-diming of too many hotel groups. (Hey, don’t you just love resort fees? In New York.)
These Airbnb users now have an emotional relationship with the brand. Yes, just as they might have once had with certain hotel brands.
I used to seek out the W Hotels (now owned by Marriott) in cities. I can’t remember the last time I stayed in one.
Airbnb guests are still prepared to forgive its occasional errors, partly because, as they travel the world, they find hosts becoming friends and apartment life becoming far more enjoyable.
Even if they have to make their own beds.
On my last stay in Lisbon, we had a gorgeous 800 square foot apartment in the middle of the city that was one-third of the price of nearby hotel rooms.
Still, some people at Marriott have apparently tried Airbnb.
The company’s global brand officer Tina Edmundson told Lieber that it was “OK” and “fine.”
I know that when my girlfriend tells me something is fine, it means it’s anything from dull to disgusting, depending on her precise intonation.
Edmundson expanded on her thoughts by admitting that “her standards might be particularly high.”
Or snooty, you might fear. Or just a touch OCD.
“I like the notion that someone professional has been in and cleaned it,” she told Lieber.
Coincidentally, here’s a disturbing headline I just read: “COURTYARD MARRIOTT. YOUR BEDS REALLY BUG ME …Allegedly Bitten Guest Sues.”
Here’s another one along the same lines.
Marriott isn’t, of course, the only hotel group to be subjected to such issues. Even Airbnb isn’t immune.
The point, though, is a simple one.
A new competitor comes along. It’s doing something different. People have warmed to it. Learn from that by really getting to know it and why people love it.
New competitors can be easy to dismiss. Ask then-Microsoft CEO Steve Ballmer.
Yes, that iPhone thing was a real joke.
Imagine what Microsoft shareholders and employees thought about those words a few years later.
Imagine, too, what Marriott — and newly-acquired Starwood — customers might think of Sorenson’s, um, (over?)confidence.
If you’re the boss, it’s incumbent upon you to experience, not just read or hear about, your competitor’s offering.
You never know what you might discover.
Humility, for example.
There’s a revolution afoot, and you will know it by the stripes.
Earlier this year, a group of Berkeley researchers released a pair of videos. In one, a horse trots behind a chain link fence. In the second video, the horse is suddenly sporting a zebra’s black-and-white pattern. The execution isn’t flawless, but the stripes fit the horse so neatly that it throws the equine family tree into chaos.
Turning a horse into a zebra is a nice stunt, but that’s not all it is. It is also a sign of the growing power of machine learning algorithms to rewrite reality. Other tinkerers, for example, have used the zebrafication tool to turn shots of black bears into believable photos of pandas, apples into oranges, and cats into dogs. A Redditor used a different machine learning algorithm to edit porn videos to feature the faces of celebrities. At a new startup called Lyrebird, machine learning experts are synthesizing convincing audio from one-minute samples of a person’s voice. And the engineers at Adobe’s artificial intelligence research unit, called Sensei, are infusing machine learning into a variety of groundbreaking video, photo, and audio editing tools. These projects are wildly different in origin and intent, yet they have one thing in common: They are producing artificial scenes and sounds that look stunningly close to actual footage of the physical world. Unlike earlier experiments with AI-generated media, these look and sound real.
The technologies underlying this shift will soon push us into new creative realms, amplifying the capabilities of today’s artists and elevating amateurs to the level of seasoned pros. We will search for new definitions of creativity that extend the umbrella to the output of machines. But this boom will have a dark side, too. Some AI-generated content will be used to deceive, kicking off fears of an avalanche of algorithmic fake news. Old debates about whether an image was doctored will give way to new ones about the pedigree of all kinds of content, including text. You’ll find yourself wondering, if you haven’t yet: What role did humans play, if any, in the creation of that album/TV series/clickbait article?
A world awash in AI-generated content is a classic case of a utopia that is also a dystopia. It’s messy, it’s beautiful, and it’s already here.
Currently there are two ways to produce audio or video that resembles the real world. The first is to use cameras and microphones to record a moment in time, such as the original Moon landing. The second is to leverage human talent, often at great expense, to commission a facsimile, in this example by hiring a photo illustrator to carefully craft Neil Armstrong’s lunar gambol. (If Armstrong never landed on the Moon, you’d have to use this second alternative.) Machine learning algorithms now offer a third option, by letting anyone with a modicum of technical knowledge algorithmically remix existing content to generate new material.
At first, deep-learning-generated content wasn’t geared toward photorealism. Google’s Deep Dreams, released in 2015, was an early example of using deep learning to crank out psychedelic landscapes and many-eyed grotesques. In 2016, a popular photo editing app called Prisma used deep learning to power artistic photo filters, for example turning snapshots into an homage to Mondrian or Munch. The technique underlying Prisma is known as style transfer: take the style of one image (such as The Scream) and apply it to a second shot.
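A widely used formulation of style transfer captures "style" as correlations between a network's feature channels, summarized in a Gram matrix; the generated image is optimized so its Gram matrices match the style image's. The sketch below computes that style loss from raw feature maps with NumPy; real systems extract the features with a pretrained convolutional network, which is omitted here:

```python
# Sketch of the Gram-matrix style loss behind style transfer.
# Feature maps are random placeholders; in practice they come from
# the intermediate layers of a pretrained convolutional network.
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) feature maps.
    Returns the (channels, channels) matrix of channel correlations,
    which captures texture/style independent of spatial layout."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(style_features, generated_features):
    """Mean squared difference between the two Gram matrices."""
    g_style = gram_matrix(style_features)
    g_gen = gram_matrix(generated_features)
    return float(np.mean((g_style - g_gen) ** 2))

rng = np.random.default_rng(0)
style = rng.standard_normal((8, 16, 16))
loss_self = style_loss(style, style)  # identical features give zero loss
loss_other = style_loss(style, rng.standard_normal((8, 16, 16)))
```

Minimizing this loss (together with a content loss) by gradient descent on the generated image's pixels is what repaints a snapshot in the manner of The Scream.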
Now the algorithms powering style transfer are gaining precision, signalling the end of the Uncanny Valley—the sense of unease that realistic computer-generated humans typically elicit. In contrast to the previous sweeping and somewhat crude effects, tricks like zebrafication are starting to fill in the Valley’s lower basin. Consider the work from Kavita Bala’s lab at Cornell, where deep learning can infuse one photo’s style, such as a twinkly nighttime ambience, into a snapshot of a drab metropolis—and fool human reviewers into thinking the composite place is real. Inspired by the potential of artificial intelligence to discern aesthetic qualities, Bala cofounded a company called Grokstyle around this idea. Say you admired the throw pillows on a friend’s couch or a magazine spread caught your eye. Feed Grokstyle’s algorithm an image, and it will surface similar objects with that look.
“What I like about these technologies is they are democratizing design and style,” Bala says. “I’m a technologist—I appreciate beauty and style but can’t produce it worth a damn. So this work makes it available to me. And there’s a joy in making it available to others, so people can play with beauty. Just because we are not gifted on this certain axis doesn’t mean we have to live in a dreary land.”
At Adobe, machine learning has been a part of the company’s creative products for well over a decade, but only recently has AI started to solve new classes of longstanding problems. In October engineers at Sensei, the company’s AI research lab, showed off a prospective video editing tool called Adobe Cloak, which allows its user to seamlessly remove, say, a lamppost from a video clip—a task that would ordinarily be excruciating for an experienced human editor. Another experiment, called Project Puppetron, applies an artistic style to a video in real time. For example, it can take a live feed of a person and render him as a chatty bronze statue or a hand-drawn cartoon. “People can basically do a performance in front of a web cam or any camera and turn that into animation, in real time,” says Jon Brandt, senior principal scientist and director of Adobe Research. (Sensei’s experiments don’t always turn into commercial products.)
Machine learning makes these projects possible because it can understand the parts of a face or the difference between foreground and background better than previous approaches in computer vision. Sensei’s tools let artists work with concepts, rather than the raw material. “Photoshop is great at manipulating pixels, but what people are trying to do is manipulate the content that is represented by the pixels,” Brandt explains.
That’s a good thing. When artists no longer waste their time wrangling individual dots on a screen, their productivity increases, and perhaps also their ingenuity, says Brandt. “I am excited about the possibility of new art forms emerging, which I expect will be coming.”
But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?
A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out three-to-five-sentence, Yelp-style blurbs. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.
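The generate-text-from-learned-statistics idea can be shown with something far simpler than the researchers' neural network: a word-level Markov chain trained on a few invented review snippets. This is a stand-in for illustration only; attack-grade generators like Yao's use recurrent neural networks:

```python
# A deliberately simple stand-in for a neural review generator:
# a word-level Markov chain over a tiny invented corpus.
import random
from collections import defaultdict

CORPUS = [
    "our favorite spot for sure",
    "the vegetarian pasta was delicious",
    "we had the pasta and it was delicious",
    "great service and our favorite pasta",
]

def train(corpus):
    """Record, for each word, the words observed to follow it."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_words=8, seed=0):
    """Walk the chain from `start`, sampling each next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and model[out[-1]]:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

model = train(CORPUS)
fake_review = generate(model, "the")
```

Even this toy produces fluent-looking fragments stitched from its training data, which is the same basic trick, scaled up, that makes machine-generated reviews hard to spot.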
With fake reviews costing around $10 to $50 each from microwork services, Yao figured it was just a matter of time before a motivated engineer tried to automate the process. (He also explored using neural nets that can defend a platform against fake content, with some success.) “As far as we know there are not any such systems, yet,” Yao says. “But maybe in five or ten years, we will be surrounded by AI-generated stuff.” His next target? Generating convincing news articles.
Progress on videos may move faster. Hany Farid, an expert at detecting fake photos and videos and a professor at Dartmouth, worries about how fast viral content spreads, and how slow the verification process is. Farid imagines a near future in which a fake video of President Trump ordering the total nuclear annihilation of North Korea goes viral and incites panic, like a recast War of the Worlds for the AI era. “I try not to make hysterical predictions, but I don’t think this is far-fetched,” he says. “This is in the realm of what’s possible today.”
Fake Trump speeches are already circulating on the internet, a product of Lyrebird, the voice synthesis startup—though in the audio clips the company has shared with the public, Trump keeps his finger off the button, limiting himself to praising Lyrebird. Jose Sotelo, the company’s cofounder and CEO, argues that the technology is inevitable, so he and his colleagues might as well be the ones to do it, with ethical guidelines in place. He believes that the best defense, for now, is raising awareness of what machine learning is capable of. “If you were to see a picture of me on the moon, you would think it’s probably some image editing software,” Sotelo says. “But if you hear convincing audio of your best friend saying bad things about you, you might get worried. It’s a really new technology and a really challenging problem.”
Likely nothing can stop the coming wave of AI-generated content—if we even wanted to. At its worst, scammers and political operatives will deploy machine learning algorithms to generate untold volumes of misinformation. Because social networks selectively transmit the most attention-grabbing content, these systems’ output will evolve to be maximally likeable, clickable, and shareable.
But at its best, AI-generated content is likely to heal our social fabric in as many ways as it may rend it. Sotelo of Lyrebird dreams of how his company’s technology could restore speech to people who have lost their voice to diseases such as ALS or cancer. That horse-to-zebra video out of Berkeley? It was a side effect of work to improve how we train self-driving cars. Often, driving software is trained in virtual environments first, but a world like Grand Theft Auto only roughly resembles reality. The zebrafication algorithm was designed to shrink the distance between the virtual environment and the real world, ultimately making self-driving cars safer.
These are the two edges of the AI sword. As it improves, it mimics human actions more and more closely. Eventually, it has no choice but to become all too human: capable of good and evil in equal measure.
At Oracle Open World, Oracle introduced “Intelligent Cloud Applications” that enable enterprise cloud applications to learn as they process new data.
iCarbonX, Anki, CARMAT, Arago, CloudMinds, Zero Zero Robotics, Preferred Networks, Inc., CustomerMatrix, Ozlo, and Scaled Inference are the top 10 Artificial Intelligence startups based on an analysis of CrunchBase data today.