The innovation of core technologies such as deep learning and brain-like intelligent computing has driven a new wave of global machine learning and ushered in the era of "big data + deep models". At the same time, strides in areas related to artificial intelligence and human-computer interaction have greatly promoted breakthroughs in cutting-edge "seeing, listening, and speaking" technologies such as image recognition, speech recognition, and natural language processing.
Making machines think like humans and reach, or even surpass, human intelligence has always been a goal pursued by mankind. Since the concept of artificial intelligence emerged in 1956, humanity has explored this frontier discipline, developing theories, methods, techniques, and applications for simulating and extending human intelligence. After nearly 60 years of development, artificial intelligence has produced a variety of methods and technologies, including knowledge representation, machine learning, intelligent search, natural language understanding, reasoning and planning, pattern recognition, neural networks, computer vision, intelligent robots, and automatic programming, which have gradually come into wide use.
In recent years, breakthroughs in core artificial intelligence technologies represented by deep learning and brain-like intelligent computing, together with the rapid development of cloud computing and big data, have greatly advanced the capabilities and applications of cutting-edge technologies such as image recognition, speech recognition, and natural language processing, drawing much attention from both industry and the scientific community.
Technological innovation in deep learning and brain-like intelligent computing
Deep learning is a new area in machine learning research within artificial intelligence. It was selected by MIT Technology Review as one of the top ten breakthrough technologies of 2013. The technique was first proposed by Hinton et al. in Science in 2006 and originated in research on artificial neural networks. Its motivation is to build neural networks that simulate the way the human brain analyzes and learns, mimicking the brain's mechanisms to interpret data such as images, speech, and text.
The essential idea of deep learning is to construct multiple layers of neurons, each of which extracts certain features and information. By combining low-level features into more abstract high-level representations of attributes, categories, or features, distributed feature representations of the data can be discovered. Taking image recognition as an example, the first layer extracts edge information, the second layer combines edges into contours, contours are then combined into parts, and parts are combined into objects. Conversely, features can be extracted from an object layer by layer, and the resulting features or attributes can be used to determine which type of object appears in the picture.
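The following minimal sketch (an illustration added here, not code from the original article) shows this layer-by-layer composition in Python with NumPy: each layer re-represents the previous layer's output in a more abstract form, and the final layer maps the abstract features to object categories. All sizes are assumptions, and the random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Random weights stand in for trained parameters (illustrative sizes).
W1 = rng.normal(size=(784, 256))   # pixels     -> low-level features (e.g., edges)
W2 = rng.normal(size=(256, 64))    # low-level  -> mid-level features (e.g., contours, parts)
W3 = rng.normal(size=(64, 10))     # mid-level  -> scores for object categories

def forward(image_vector):
    h1 = relu(image_vector @ W1)   # layer 1: extract elementary features
    h2 = relu(h1 @ W2)             # layer 2: combine them into more abstract features
    scores = h2 @ W3               # layer 3: map abstract features to categories
    return scores

x = rng.random(784)                # a flattened 28x28 "image"
print(forward(x).shape)            # (10,) -- one score per object category
```

Each layer only sees the output of the layer below it, which is what lets the network build progressively more abstract representations of the raw input.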
Since 2006, deep learning has become a focus of artificial intelligence research, and Stanford University, New York University, and the University of Montreal in Canada have become centers of deep learning research. In 2010, the U.S. Defense Department's DARPA program funded deep learning projects, with participants including Stanford University, New York University, and NEC Laboratories America. Subsequently, Google, Microsoft, Baidu, and other high-tech companies with large stores of data competed for resources to occupy the technical commanding heights of deep learning.
Brain-like intelligent computing, one of the important research directions of biologically based human-brain reverse engineering in the field of artificial intelligence, has as its core purpose deciphering the thinking mechanisms of the human brain. With the development of human brain science, brain-like intelligent computing has moved robotics and artificial intelligence research from behavioral simulation to the simulation of brain neurons. At that point, artificial intelligence research will no longer rest on guesses about how the human brain works; instead, it begins with simulating the morphology and activity of a single neuron and gradually builds up neural microcircuits, brain regions, and ultimately the whole brain.
At present, Europe has put forward the Blue Brain Project and the Human Brain Project, and the United States launched a brain research initiative in 2013 that will invest 3 billion U.S. dollars over 10 years. In recent years, Qualcomm and IBM have successively launched the Zeroth and SyNAPSE brain-like learning chip architectures. The Institute of Automation of the Chinese Academy of Sciences has also initiated brain-inspired engineering research, conducting theoretical exploration based on cognitive neuroscience and brain network research, simulating real human brain neural networks, and building brain-inspired neural computing chips and systems.
In China, Baidu Research is led by Baidu's chief scientist and machine learning expert Andrew Ng (Wu Enda). Using deep learning to simulate the neurons of the human brain, Baidu has built the world's largest deep neural network, with more than 10 billion parameters; the model can process hundreds of billions of feature vectors across almost all existing multimedia domains. Baidu's deep learning technology has been applied to speech, image, and text recognition, natural language processing, and other products. Baidu was also the first company in the world to apply GPUs to artificial intelligence and deep learning; compared with ordinary CPU servers, efficiency has increased by more than 30 times, greatly improving its computing capability for handling massive training data.
Big Data Drives Breakthroughs in "Seeing, Listening, and Speaking" Technologies
The development of deep learning and brain-like intelligent computing has brought about a new wave of machine learning, ushering in the era of "big data + deep models". Great strides in artificial intelligence and human-computer interaction are promoting breakthroughs in cutting-edge "seeing, listening, and speaking" technologies such as image recognition, speech recognition, and natural language processing.
Image recognition was the earliest field in which deep learning was tried. The well-known artificial intelligence researcher and New York University professor Yann LeCun and others invented the Convolutional Neural Network (CNN), a deep neural network with a convolutional structure, in the late 1980s. The structure of the CNN is inspired by the well-known Hubel-Wiesel model of biological vision, in particular simulating the behavior of simple cells and complex cells in the visual cortex areas V1 and V2. CNNs achieved the world's best results at the time on small-scale problems such as handwritten digit recognition, but for a long time did not see greater success, mainly because CNNs were not effective on large-scale images. In October 2012, Hinton et al. used a deeper CNN to achieve the world's best result at the time on the well-known ImageNet benchmark, bringing a significant improvement in image recognition. The main reasons were algorithmic improvements, such as techniques for preventing overfitting, and, more importantly, the increase in computing power brought by GPUs and the availability of more training data.
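The sketch below illustrates the kind of architecture described above: stacked convolutions, pooling, dropout to curb overfitting, and optional GPU use. It is written with present-day PyTorch purely for illustration (an assumption on my part; the 2012 work used different tooling), and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: combine edges into motifs
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                              # regularization against overfitting
            nn.Linear(32 * 8 * 8, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

device = "cuda" if torch.cuda.is_available() else "cpu"   # GPUs sped up this line of work
model = TinyConvNet().to(device)
dummy = torch.randn(4, 3, 32, 32, device=device)          # a batch of four 32x32 RGB images
print(model(dummy).shape)                                  # torch.Size([4, 10])
```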
Reportedly, at the end of 2012 Baidu successfully applied deep learning technology to natural-image OCR and face recognition and launched corresponding desktop and mobile search products. In 2013, deep learning models were successfully applied to image recognition and understanding, reducing the error rate by 30% and raising face verification accuracy above 98%. In Baidu's experience, applying deep learning to image recognition not only greatly improves accuracy but also avoids the time spent on manual feature engineering, thereby greatly improving the efficiency of online computing. In the future, deep learning is expected to replace the "hand-crafted features + machine learning" approach and gradually become the mainstream method for image recognition.
For a long time, statistical probability models represented by the Gaussian Mixture Model (GMM) held a monopoly in speech recognition applications. A GMM is essentially a shallow network model and cannot fully describe the state-space distribution of speech features, whose dimensionality is generally only a few dozen. With deep neural networks, the correlations among features can be fully described, and multiple consecutive frames of speech features can be spliced together into a high-dimensional feature vector with which the deep neural network is trained. Because the deep neural network uses a multi-layer structure that simulates the human brain, it can extract informative features layer by layer, eventually forming features better suited to pattern classification.
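The following short sketch shows the frame-splicing step described above: stacking several consecutive frames of low-dimensional acoustic features into one high-dimensional input vector for a deep neural network acoustic model. The frame count, feature dimension, and context width are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_frames, feat_dim = 100, 40          # e.g., 40-dim filterbank features per 10 ms frame
context = 5                             # 5 frames of left context and 5 of right context

features = rng.random((num_frames, feat_dim))
# Repeat the edge frames so frames near the boundaries also get a full window.
padded = np.pad(features, ((context, context), (0, 0)), mode="edge")

def splice(t):
    """Return the stacked feature vector for frame t: (2*context + 1) * feat_dim dims."""
    window = padded[t : t + 2 * context + 1]   # 11 consecutive frames
    return window.reshape(-1)                  # 11 * 40 = 440-dimensional input

print(splice(0).shape)   # (440,) -- the high-dimensional vector the DNN is trained on
```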
This kind of multi-layer structure has much in common with the way the human brain processes speech and image information. When used in actual online services, deep neural network modeling can be integrated seamlessly with traditional speech recognition technology, greatly increasing the recognition rate of speech recognition systems without adding extra system cost, and thereby thoroughly changing the original technical framework of speech recognition. For example, Baidu launched a voice search system based on deep neural network technology in November 2012, becoming one of the earliest companies to use deep neural networks in commercial voice services.
In addition to speech and images, Natural Language Processing (NLP) is another technical field in which deep learning plays a role. After decades of development, statistics-based models have become the mainstream of NLP, but artificial neural networks, one family of statistical methods, long received little attention in the NLP field. In 2008, Ronan Collobert of NEC Laboratories America and others used word embeddings and a one-dimensional convolutional structure to tackle NLP problems and achieved accuracy comparable to the state of the art. Recently, Stanford University professor Chris Manning and others have also turned their attention to deep learning for NLP. In general, the progress of deep learning in NLP has not been as impressive as in speech and images, but there is still much room for exploration.
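The sketch below shows the general idea of word embeddings followed by a one-dimensional convolution for a sentence-level prediction task. It is illustrative PyTorch code, not the original NEC implementation, and the vocabulary size, embedding dimension, and number of classes are assumptions.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, num_classes = 10_000, 50, 5

embedding = nn.Embedding(vocab_size, embed_dim)            # map word indices to dense vectors
conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)  # slide over the word sequence
classifier = nn.Linear(64, num_classes)

tokens = torch.randint(0, vocab_size, (2, 20))   # batch of 2 sentences, 20 tokens each
emb = embedding(tokens)                          # (2, 20, 50)
emb = emb.transpose(1, 2)                        # Conv1d expects (batch, channels, length)
feat = torch.relu(conv(emb)).max(dim=2).values   # max-over-time pooling -> (2, 64)
print(classifier(feat).shape)                    # torch.Size([2, 5]) -- one score per class
```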
Today, in the era of the mobile Internet and big data, a huge number of Internet users generate large amounts of data, including text, images, speech, video, and geographic locations, and the scale is growing explosively. IDC predicts that by 2020 the world will hold a total of 44 ZB of data. In the face of such massive data, emerging machine learning technologies represented by deep learning can do things that traditional artificial intelligence algorithms cannot: their output becomes more accurate as more data is processed, achieving better and better results and forming a positive cycle for artificial intelligence.