【Tianji Home Appliances Channel】Artificial intelligence will be the hot spot behind "Internet+". Opinions on artificial intelligence vary widely, as we noted in our subscription account on May 13th. So what exactly is artificial intelligence? How did it arise, and how does it work? See this article from the Economist of May 9th, which explains the subject in plain terms and is well worth reading.
Elon Musk is busy building the future of all mankind. He made his first fortune with PayPal in the early days of the Internet; later he helped found the solar power company SolarCity to bring green energy to thousands of households, and created the famous electric-car maker Tesla. He also set up the space company SpaceX, determined to see humans settle on Mars within his own lifetime. In everyone's eyes, this outstanding technologist is, of course, boundlessly optimistic about tomorrow.
However, not all future technology leaves him at ease. Speaking at the Massachusetts Institute of Technology last October, Musk denounced artificial intelligence (AI) as "summoning the demon": he believes AI will rival human intelligence and become the greatest threat to humanity. He does not hold this view alone in the scientific community. Nick Bostrom, a philosopher at Oxford University, has helped develop the field of "existential risk", which studies threats to humanity as a whole; Bostrom counts advanced artificial intelligence, alongside an asteroid striking the Earth and all-out nuclear war, among the potentially enormous threats. Lord Rees, a former head of the Royal Society, Britain's foremost scientific institution, and a co-founder of the study of existential risk, is likewise worried about artificial intelligence.
Within the artificial intelligence field itself, by contrast, researchers are optimistic and open-minded, even ecstatic about the rapid progress of recent years. Companies developing artificial intelligence, such as Google, Facebook, Amazon and Baidu, have entered an "arms race" phase: recruiting researchers, establishing laboratories and acquiring startups. In general, people in the industry do not worry about being overtaken by machines. Their job is not to create a new kind of mind, but simply to hand over to machines work that used to be done by people.
The computers, tablets and smartphones connected to the global Internet now generate a flood of data, and machines possess incredible computing power with which to process it. With new algorithms, machines understand human language better and better and can recognize all kinds of images. At the same time, people worry that the technology will take away their jobs. At the core of much of today's work is recognizing patterns and translating symbols, tasks that until now had to be done by humans. If computers can automate such tasks, or assist so effectively that human productivity soars, they will displace many white-collar jobs.
The rush toward artificial intelligence is everywhere. Last year Google was rumored to have quietly paid $400 million to snatch DeepMind, a London startup, from under Facebook's nose. Facebook, for its part, has its own artificial intelligence research lab, led by Yann LeCun, a star researcher from New York University. Google also hired Stanford University's artificial intelligence heavyweight Andrew Ng; Baidu later lured Ng away to lead its lab in Silicon Valley. Other companies, such as Narrative Science in Chicago, are developing artificial intelligence for writing; the business magazine Forbes has begun using it to produce basic financial reports. Kensho, in Massachusetts, wants to use artificial intelligence to automate parts of the financial industry's work. Investors are rushing in. On April 13th IBM announced a new use for its Watson computer: analyzing people's medical records to serve as an artificial-intelligence aide to doctors. Remember Watson? In 2011 it handily beat two human champions on a television quiz show.
In fact, research on artificial intelligence is almost as old as the computer itself. The main direction of current research is a branch of the field called "deep learning", a modern form of "machine learning" in which computers teach themselves tasks by crunching large data sets. This new approach bridges a gap that has long plagued artificial intelligence: tasks that are hard for humans are often easy for computers, and vice versa. The simplest computer can beat any human at complex mathematical operations; but things trivial to a human, such as recognizing faces, understanding speech or identifying the objects in a photo, have long remained tangled and difficult for computers.
When humans do things they find difficult, such as solving differential equations, they must write out many formulas, and turning those formulas into a computer program is straightforward. But for the things humans find easy there is no comparable set of formulas or rules, and devising such rules is extremely hard. Take a famous example: any adult can tell a pornographic image from a non-pornographic one, but ask them to describe how they do it and they are all but lost for words. In 1964 Potter Stewart, then a justice of the United States Supreme Court, found himself unable to produce a strict legal definition of pornography. Exasperated, he conceded that although he could not define it in the abstract, "I know it when I see it."
Machine learning turns this around: rather than a programmer designing the rules, the computer generates a set of rules for itself, so that it too can know it when it sees it. How does the computer generate rules? Through careful statistical analysis of masses of data.
Many current machine learning systems use an old artificial intelligence technique, neural networks, to find the statistical patterns they need. Neural networks are a method researchers came up with in the 1950s. Although they did not know what intelligence was, they knew the brain was its birthplace, and that the brain processes information not with transistors but with neurons. If those neurons, highly interconnected cells passing electrochemical signals among themselves, could be simulated, perhaps some sort of intelligent behavior would emerge.
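To make the idea concrete, here is a minimal sketch of one simulated neuron, not taken from the article: it sums weighted input signals and "fires" if the total crosses a threshold. The wiring and numbers are purely illustrative.

```python
# A minimal sketch of one artificial neuron: weighted inputs, a bias,
# and a simple threshold "activation". Illustrative only; real networks
# use many such units and smoother activation functions.

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming "signals".
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Fire (output 1) only if the combined signal is strong enough.
    return 1 if total > 0 else 0

# Example: a neuron wired to behave like a logical AND gate.
weights = [1.0, 1.0]
bias = -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```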
Neurons are very complicated, and even today scientists can simulate them only crudely. Yet early studies showed that even the most primitive simulated neural networks could perform certain tasks well. Chris Bishop, an artificial intelligence researcher at Microsoft, points out that telephone companies have been using an echo-cancellation algorithm discovered with neural networks since the 1960s. But the early success of artificial neural networks soon lost its appeal: the limited computing power of the day capped the size of the simulated networks, and with it the further development of the technology.
In the past few years, the number-crunching power of chips has improved enormously, driven by market demand for rendering video games. Early neural networks could contain only dozens or hundreds of neurons, usually organized as a single layer; the newest ones, such as those used by Google, can simulate billions. At that scale researchers can imitate the human brain more closely, organizing the neurons into hierarchical layers. It is in the communication between these layers that machines can finally "learn deeply".
Each layer of an artificial neural network operates at a different level of abstraction. To process a picture, for example, the raw image is first fed to the bottom layer, which analyzes the brightness and color of each pixel and how the pixels are distributed across the image. That information is passed up to the next layer, which combines it into more abstract categories, picking out edges, distinguishing shadows and so on, and passes the result up again. The layer above analyzes those edges and shadows in turn, looking for combinations that mark the features of particular objects: eyes, lips, ears. Combining all of this, the network can finally decide whether or not the image is a human face, and even judge whether it is a face it has seen before.
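As a rough illustration of that layer-by-layer flow (a toy sketch, not the article's system; every size and weight below is made up), each stage re-describes its input at a higher level of abstraction:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stack of three layers. Conceptually:
# pixels -> edges/shadows -> object parts -> "face?"
layer_sizes = [64, 32, 16, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(pixels):
    """Pass a flattened image up through the layers, one abstraction at a time."""
    activation = pixels
    for w in weights:
        # Each layer combines the previous layer's output into a more
        # abstract description of the same image.
        activation = np.tanh(activation @ w)
    return activation  # near +1: "face"; near -1: "no face"

image = rng.random(64)  # stand-in for an 8x8 grayscale image
print(forward(image))   # untrained, so the answer is meaningless until training
```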
For such a neural network to be truly useful, it must first be trained. If a computer is to write its own face-recognition program, humans must prepare a "training set" of thousands of images, some containing faces and some not, with each image labeled accordingly. The images are fed into the system as "input", and the labels ("face" or "no face") serve as the "output" against which the computer checks itself. The system's task is to work out a set of statistical rules that matches inputs to the correct outputs. To do so, it searches every layer of abstraction for features that correlate with the face pictures. Once those correlations have been found and weighted, the machine can reliably label every image in the training set. The next step is to give it a fresh set of images, to see whether the face-recognition rules it has extracted hold up in the real world.
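A minimal sketch of such a training loop, assuming a toy synthetic "training set" rather than real photos; the simple one-layer statistical model and gradient updates below stand in for the far larger process the article describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "training set": 200 labeled examples. Each row stands in for an
# image's features; label 1 means "face", 0 means "no face".
# (Purely synthetic data, for illustration only.)
X = rng.standard_normal((200, 8))
true_w = rng.standard_normal(8)
y = (X @ true_w > 0).astype(float)      # the hidden rule to recover

w = np.zeros(8)                          # the learned rules start empty
lr = 0.1

for step in range(500):
    # Forward pass: the system's current guess for every training image.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # "face probability"
    # Compare guesses against the human-supplied labels and nudge the
    # statistical rules toward fewer mistakes.
    w += lr * X.T @ (y - p) / len(y)

# Fresh images the system has never seen: do the learned rules hold up?
X_test = rng.standard_normal((50, 8))
y_test = (X_test @ true_w > 0).astype(float)
pred = (X_test @ w > 0).astype(float)
print("accuracy on new images:", (pred == y_test).mean())
```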
Through this bottom-up approach, machine-learning algorithms learn to identify features, concepts and categories that humans understand but cannot articulate. For a long time, however, the approach was confined to narrow specialist domains. Programs usually needed hints from their designers, entered by hand, to point them at a specific task: these are the images you should look at, this is the speech you should listen to.
Moreover, early simulated neural networks could only handle so much data; beyond a certain point, feeding them more information was useless. Modern systems need far less manual input and tuning: however much data you have, you can throw it at them, and they will put it to good use. And thanks to the spread of the Internet, sources of data have multiplied enormously.
Internet giants such as Baidu, Google and Facebook hold huge amounts of data generated by their users: e-mails, search and purchase records, photos of every description, all quietly stored on their servers. These companies have long known that the data contain useful information and patterns, but the sheer volume made sifting them by hand impossible. For computers, though, it is easy work: the overwhelming mass of information becomes a treasure trove from which machines can learn and extract regularities. Better still, much of it was labeled by the people who created it, whether photos or audio, which makes it especially useful for machine learning. Given the right algorithms, computers can teach themselves from this annotated data and eventually mine out useful patterns, rules and categories.
The results are astounding. In 2014 Facebook launched a product called "DeepFace" that can identify a given face with 97% accuracy, even when the face is partly obscured or dimly lit. Microsoft's object-recognition software can even tell apart two breeds of Welsh corgi dog that look almost identical. Some countries already use facial-recognition technology at their borders. Another system can identify and pick out individuals from video recordings, which has police and security services keenly interested. A report on May 5th said that America's security services use voice-recognition software to transcribe intercepted calls directly into text.