America, a leading investor in technology generally, has delved deep into the artificial intelligence field. The San Francisco Bay Area alone has attracted 41 percent of global AI investment over the last five years. On the opposite coast, the culture of investment in artificial intelligence is equally strong. In the Northeast, large corporations and universities are partnering to foster creative solutions to the world's problems. In Boston, for example, MIT and IBM are collaborating to improve IBM's Watson technology. Watson answers questions posed to it in natural language by instantaneously compiling millions of results from the internet. It is best known for beating Ken Jennings at Jeopardy!, quickly parsing the clues read by Alex Trebek to calculate the most likely response. The match served as a test case to show how quickly and accurately Watson could solve problems, and the large-scale publicity put IBM a step ahead of competitors such as Google Home and Amazon Alexa. Similarly, in New York, high-tech AI companies have chosen the metropolitan hub as the home for their AI-focused conferences and, more importantly, as headquarters for their R&D operations, thanks to its highly skilled and networked workforce.

This segues nicely into a SWOT analysis of the American AI industry. Looking first at America's strengths in the AI sector, what comes to mind is the highly trained workforce the country boasts. American engineers are some of the most sought after in the world and some of the most experienced in artificial intelligence development. Another strength is the developed economy as a whole: as will be discussed in more depth shortly, large American corporations are investing billions into AI development and sparking innovations in the space that are unmatched across the globe. Despite these strengths, America does have some weaknesses in this highly technical space.
First, American wages are quite high, which makes it hard for smaller start-ups to base their operations in the US. The economy is also decentralized; although this creates economic opportunity for people around the world, it divides the workforce, with engineers at different companies working on the same problems and duplicating effort. As for opportunities, the US is among the best positioned in the AI arena: over half of all global AI investment flows into the US, and companies are already rolling out effective products with AI integration. The threats the US faces in this space are immense. First and foremost are America's adversaries. Countries like China and Russia, themselves leaders in the space, have an interest in sabotaging American operations. Beyond the risk that such nations may use their own AI technologies to compromise American infrastructure, as previous international hacks have shown, these countries may try to steal American intellectual property in such a lucrative field. Beyond the threat from abroad, domestic competition is another threat the nation must face. In such a fast-paced world, companies race to roll out their AI-integrated products as quickly as possible. In order to beat the competition, it is possible an American company will release an AI device before it is ready for market. With a technology as impactful as AI, the slightest flaw in a system could have monumental, even mortal, consequences not only for Americans but for the entire world.
Google, one of the largest tech companies in the world and based in America, is also amassing huge investments in artificial intelligence research. In 2017, Google CEO Sundar Pichai announced the formation of Google.ai, an artificial intelligence branch of the company. One of the most important and applicable uses of the technology Google.ai has developed is in the healthcare industry. This technology's true value is that it can take tasks doctors are forced to do manually thousands of times a day, like reading a PET or CT scan, and automate them. An article published in Quartz outlined the breakthroughs Google's AI has made in the field of neurology. Previously, a neurologist looking to create a 3D model of the neuron-synapse landscape of the brain would have to manually scan through thousands of 2D slice images to create such a model. According to the article, "Google estimates it would have taken 100,000 hours to label the entire sample, which was only a 1mm cube. The AI trained for seven days to be able to accomplish the same task". Google has taught its computers to automate the process of combining these thousands of images into a complete picture. Streamlining such a process is extremely important for doctors and researchers trying to tailor treatments from the results: a team of ten researchers working forty-hour weeks would need over four and a half years to create such an image, while Google's AI could learn and complete the entire process in a week. Google is currently working on compiling these 3D images over time to trace neural activity and pinpoint the neuron-synapse connections critical to certain functions. Artificial intelligence has the potential to enhance medical care dramatically: diagnosis, treatment planning, and more can be analyzed and streamlined by AI to create a more effective and efficient medical care landscape.
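The time savings quoted above can be sanity-checked with quick arithmetic. A minimal sketch: the 100,000-hour figure is from the article, while the team size and work-year assumptions are illustrative, not from the source.

```python
# Rough sanity check of the labeling-time estimate quoted above.
# Assumptions (not from the source): a 10-person team and a work-year
# of 40 hours/week x 52 weeks.
TOTAL_HOURS = 100_000        # Google's estimate to hand-label the 1mm cube
TEAM_SIZE = 10               # hypothetical team of researchers
HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 52

hours_each = TOTAL_HOURS / TEAM_SIZE        # 10,000 hours per researcher
weeks_each = hours_each / HOURS_PER_WEEK    # 250 weeks
years_each = weeks_each / WEEKS_PER_YEAR    # ~4.8 years

print(f"{years_each:.1f} years per researcher")  # -> 4.8 years per researcher
```

Dividing the work ten ways still leaves each person nearly five years of full-time labeling, against the AI's seven days of training.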
Google's artificial intelligence ambitions stretch far beyond the healthcare sector. One of the largest aims of any artificial intelligence pursuit is to create a system capable of the abstract thought needed for complex problem solving. DeepMind, a company owned by Google, is working to develop an AI capable of passing difficult problem-solving examinations. DeepMind's system attempted the test known as Raven's progressive matrices, attaining a score of 63%. The test presents a series of shapes with different shadings and patterns, followed by a blank space where the respondent must complete the pattern. Doing so may seem simple enough, but it requires complex reasoning skills that prior technologies could not attain. Demonstrating these skills, even in their simplest form, is extremely promising. It has given researchers at DeepMind hope that their technology can eventually bring outside-the-box problem solving to some of the most complex and troubling scientific problems our world faces. Researchers at and connected to Google are looking to push machine reasoning to new levels in order to improve their market positioning and the quality of the products and services they offer.
Facebook, another large player in the American tech landscape, is making its own investments in AI research. With the creation of FAIR (Facebook Artificial Intelligence Research), the social media giant is looking to incorporate AI into its processes to improve the site's perception and user experience. One of the newer AI projects Facebook has been working on to improve user experience is ExGAN. ExGAN, or exemplar generative adversarial network, is a program that recreates and fixes imperfections on a digital face. FAIR researchers are using this kind of learning to improve the photos users upload to the platform: Facebook is building software that can model a user's eyes and recreate an open-eyed version of an uploaded photo. According to an article on the project, "Rather than teaching a deep network to recreate eyes from a data set of other people's eyes, the researchers figure the AI can use photos of the same person as a reference point. And that's where Facebook has the upper hand — it already has countless photos of you tagged on its servers, ready to cross-reference". Facebook is capitalizing on its large and engaged user base to set itself apart in technology development. By utilizing the millions of photos already in its database, researchers can custom-fit the generated eyes to each individual user. Other similar services have algorithms that can open your eyes in a photo, but with their limited photo bases they can only apply generic eyes, which often turn out incorrectly shaped or colored for the user. This is where Facebook's accumulated data gives it a market advantage: in building AI systems for its own products, it often sees lower costs of entry and greater flexibility thanks to its built-up data and user base.
Facebook is also looking to create AI-centered solutions to some of its most pressing concerns. The company has recently taken a lot of heat for its management of false and potentially harmful content. It is very difficult for a website with millions of daily public posts to apply its user agreement across such a large volume of content. By equipping its systems with learning capabilities, Facebook hopes to create algorithms that can identify hate speech with greater accuracy. For example,
"AI can identify terrorism-linked content with 99 percent accuracy, but hate speech is different. Facebook's AI flagged hate speech only 38 percent of the time". Facebook still has a long way to go toward a truly viable AI capable of monitoring all of the platform's posts. The algorithm still struggles to identify hate speech correctly; the main problem is the colloquial nature of each post. At its core the algorithm is somewhat of a keyword search, so it has trouble distinguishing quotation or criticism of hate speech from the hate speech itself. Another example of the code's deficiencies came when the algorithm flagged the Declaration of Independence as possible hate speech. Although the document contains a reference to "savage Indians", it is obviously not the kind of speech Facebook is trying to remove from its platform. Facebook is refining its algorithm to teach the AI how to properly handle context when evaluating posts, so that content moderation can be automated with greater confidence. Like many other AI-focused companies, Facebook must grapple with the problem of teaching its systems to manage context when interacting with human text and dialogue.
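Why a keyword-style filter stumbles on quotation and criticism can be illustrated with a deliberately naive sketch. This is a toy model only; the word list and example posts are invented and have nothing to do with Facebook's actual classifier or lexicon.

```python
# A deliberately naive keyword-based flagger, illustrating why
# context-blind matching misfires. The term list and posts are
# hypothetical examples, not Facebook's actual system.
FLAGGED_TERMS = {"savage", "vermin"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any listed term, ignoring all context."""
    words = {w.strip(".,!?\"'()").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A historical document quoting the term is flagged...
assert flag_post("...the merciless savage Indians...") is True
# ...and so is a post criticizing that very language:
assert flag_post('Calling people "savage" is unacceptable.') is True
# An ordinary post passes:
assert flag_post("Have a nice day") is False
```

Both the historical quotation and the post condemning the slur trip the same rule, which is exactly the failure mode described above: the filter matches words, not intent.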
Apple was the first of the American tech giants to incorporate mainstream AI systems into its consumer products. Siri, the voice-activated assistant on the iPhone, made Apple the first American tech giant to ship NLU (natural language understanding) in a consumer product. Siri was revolutionary at the time of its release, which predated competing systems like Amazon Alexa and Google Home. Although Siri was one of the first NLU consumer products on the market, since its integration into the iPhone 4s in 2011 it has slipped from revolutionary status and is now seen as inferior to most consumer NLU options. In an attempt to keep up with its competitors, Apple poached John Giannandrea from Google, where he was serving as head of Search and AI. Apple is looking to improve Siri in three distinct yet interconnected ways: "deliver better answers to the questions we ask our data, allow us to ask new types of questions, and allow us to query new types of data altogether (e.g., audio, images, and videos as well as words and numbers). By all accounts, this is what Giannandrea is extremely good at". At its core, Apple is attempting to consolidate top talent from across the AI landscape to create a more complete and streamlined AI experience. A well-developed NLU search function, built into the most purchased smartphone in the world and able to compete with systems like Amazon Alexa, would set Apple apart from other smartphone manufacturers.
Amazon is another American goliath looking to integrate AI into its operations. Amazon is not only turning to AI to improve current operations but also sees it as an opportunity to expand into new sectors. Take Amazon Go, for example: this grocery store leverages AI technologies to create an easy and efficient shopping experience without a single cashier. Artificially intelligent systems track which items are taken off the shelves and which enter the connected shopping carts, and they measure shopper analytics to build a more efficient purchasing system. Shoppers simply leave the store with their baskets, without stopping at a checkout line, and are charged through an app. Besides being more pleasant and efficient for the shopper, the system increases efficiency for Amazon: the company avoids paying cashiers, reduces human error, and gains a quasi-testing site for its technologies. The same systems learning to track the movement of goods within the store apply to Amazon's main enterprise, shipping. Amazon has thousands of enormous warehouses that rely on efficient systems to track which products must be sent out to fill online orders. By honing these self-learning technologies in Amazon Go, Amazon can improve the quality and efficiency of its main business.
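The pick-up/put-back bookkeeping such a store requires can be sketched as a tiny event ledger. This is an illustrative model only: the product names and prices are invented, and Amazon's actual pipeline infers these events from cameras and shelf sensors rather than receiving them as explicit messages.

```python
from collections import Counter

# Hypothetical catalog; names and prices are made up for illustration.
PRICES = {"milk": 3.50, "bread": 2.25, "eggs": 4.00}

def bill(events):
    """Fold a stream of (action, item) events into a final charge."""
    cart = Counter()
    for action, item in events:
        if action == "take":
            cart[item] += 1
        elif action == "return":
            cart[item] -= 1
    # Charge only for items the shopper actually left with.
    return sum(PRICES[item] * n for item, n in cart.items() if n > 0)

events = [("take", "milk"), ("take", "bread"),
          ("return", "bread"), ("take", "eggs")]
print(bill(events))  # -> 7.5  (milk + eggs; the returned bread is free)
```

The shopper who picks up bread and puts it back is never charged for it; only the net contents of the basket at exit produce a bill, which is the behavior the checkout-free store depends on.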
Amazon doesn't look to artificial intelligence only to branch into new industries; it also uses it to improve operations in its existing ventures. Amazon's main revenue earner, its online store, integrates AI technology into almost everything it does. One of the store's most comprehensive and complex features, the recommendation tab, is built on AI. According to Forbes, "AI also plays a huge role in Amazon's recommendation engine, which generates 35% of the company's revenue". This represents a huge share of the company's overall revenue. By tracking a customer's previous purchases and search history, along with general consumption trends across customers, Amazon is able to tailor a customized buying experience for each user. It can seem creepy how accurately Amazon's recommendations anticipate possible purchases, but in reality it is just sophisticated AI hard at work. There is legitimate concern about how much of our data these companies compile, but every time you open the internet you give away a segment of your privacy, and Amazon is simply capitalizing on these disclosures, tailoring its sales around your data, and quite successfully so, as reflected in the fact that over a third of all sales are generated by the feature.
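One simple form such a recommendation engine can take is item-to-item collaborative filtering over co-purchase counts: items frequently bought together with what you already bought are ranked and suggested. The sketch below uses invented purchase histories; Amazon's production system is far more sophisticated than this toy.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories (invented data, one set of items per order).
histories = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"camera", "tripod"},
    {"novel", "bookmark"},
]

# Count how often each pair of items appears in the same order.
co_counts = defaultdict(int)
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item, k=2):
    """Rank the items most often bought together with `item`."""
    scores = {}
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] = scores.get(b, 0) + n
        elif b == item:
            scores[a] = scores.get(a, 0) + n
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [name for name, _ in ranked][:k]

print(recommend("camera"))  # -> ['sd_card', 'tripod']
```

Camera buyers in this toy data also bought SD cards and tripods, so those surface first; a novel buyer sees the bookmark instead. Scaled to millions of orders, the same co-occurrence idea is one ingredient behind a "customers also bought" tab.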