Monday, December 17, 2012

How to Develop Artificial Intelligence


  Direction is more important than effort. In the field of artificial intelligence we face exactly this direction problem: how should we develop AI? What should we do to make AI more intelligent?

  In labs all over the world, plenty of scientists and engineers are focusing their effort on AI, which consists of a long list of subareas such as image recognition, natural language understanding, and meta-heuristic algorithms. As a result, algorithms nearly as numerous as Avogadro's number have been proposed in papers. Unfortunately, none of these algorithms has a sense of beauty, in either form or content. This frustrating result makes us reconsider the road we are traveling, and it leads back to the original question: who built this improper road we are now on? Is it the scientific common sense we have drawn from the development of other subjects, such as mechanical engineering and electronic engineering? Those subjects were propelled by a few great people, such as Newton and Maxwell, and are built on the basis of a few brief laws. In contrast, the AI field has developed without any quantitative rules or laws, if we count Isaac Asimov's 'three laws of robotics' as non-quantitative ones.
Are quantitative laws essential to AI? This is a tough question. Many people who possess perfect intelligence know little about math. Yet without quantitative laws, should we develop the system from textual description alone?

   Despite those chaotic discussions, I believe a few brief frameworks should be established before AI development begins. There are three of them: probability-based design, iteration-based design, and big-data-based design.
Probability-based design holds that logic in the AI world is never as certain as it is in other engineering and scientific areas. None of the "facts" in the AI world is beyond doubt, and there is no rigid, strict derivation: every derivation carries a confidence value. For example, when AI recognizes the characters on a page, it may output an "a" or an "α", the former with a confidence of 80%, the latter with a confidence of 18%, and everything else with a confidence of 2%. In some extreme conditions the recognition is definitive and we call it 100% for simplicity, but the probability-based view should be kept in mind before making that simplification. Another example comes from doubting the laws of physics. AI should assign every acknowledged physical rule a confidence value: rules such as Newton's three laws of mechanics should be given a very high confidence, say 99.9999%, while black hole theory should be given a lower one, such as 80%. Every derivation in the AI world is uncertain, because none of the "facts" or rules the derivation depends on is certain. In short, the AI world is built on uncertainty. This design brings extreme complication, yet what we can be sure of is that only in this way can we really make AI more intelligent. People tend to doubt what they see, and so they can achieve breakthroughs or make mistakes even in very simple situations. Essentially, humans are "designed" according to a probability-based model, and that is what differentiates us from rigid computer programs.
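  To make the idea concrete, here is a minimal Python sketch (my own illustration with hypothetical names, not part of the original design): every observation and every rule carries a confidence value, and a derivation inherits the uncertainty of both. The numbers are simply the examples from the paragraph above.

    # A minimal sketch (hypothetical names) of probability-based design:
    # every "fact" carries a confidence value instead of being treated as certain.

    # Recognition output as a distribution, not a single answer
    # (the 80% / 18% / 2% example from the text).
    recognition = {"a": 0.80, "α": 0.18, "other": 0.02}

    # Acknowledged physical rules, each stored with a confidence value.
    rules = {
        "Newton's laws of mechanics": 0.999999,
        "black hole theory": 0.80,
    }

    def derive(premise_confidence, rule_confidence):
        """A derivation is only as certain as the premise and the rule it uses."""
        return premise_confidence * rule_confidence

    # A conclusion drawn from an 80%-confident premise through a
    # 99.9999%-confident rule is itself uncertain.
    print(derive(recognition["a"], rules["Newton's laws of mechanics"]))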

   The probability-based model is a unique way to make AI intelligent. However, the extreme complexity of the computation and chaotic effects can render the model ineffective. This is where iteration-based design comes in. Iteration reduces uncertainty and rules out implausible hypotheses, and as a result it reduces the computational complexity. In fact, iteration is a process of verifying the correctness of a probability model. For example, when AI recognizes a 3D object in a series of pictures, it may hypothesize that the object is a desk. As the iteration begins, the AI renders a 3D desk, takes a snapshot of it from a suitable perspective, and compares it with the object in the pictures, continuously correcting the model until the difference between them can be ignored. Although some people may regard iteration-based design as a mere extension of probability-based design, it is actually a basic design principle of AI development, for it makes us realize that a system built on probability alone is fragile and unachievable. From the perspective of control theory, iteration turns the derivation into a closed loop, making the system steadier and more accurate than an open loop.
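  As a rough illustration of that closed loop, the following Python sketch (a toy of my own, with a single-parameter "renderer" standing in for the 3D desk) hypothesizes a model, renders it, compares the rendering with the observation, and feeds the error back until the difference can be ignored.

    # A toy sketch of iteration-based design: render the hypothesis,
    # compare it with the observation, correct it, and repeat.

    def render(size):
        """Toy 'renderer': the snapshot of the hypothesized object is its area."""
        return size * size

    observation = render(7.0)       # pretend this came from the pictures
    hypothesis = 1.0                # initial guess about the object

    for step in range(200):
        error = observation - render(hypothesis)
        if abs(error) < 1e-6:       # the difference can be ignored: stop iterating
            break
        hypothesis += 0.05 * error  # feedback correction, as in a closed loop

    print(f"estimate after {step} iterations: {hypothesis:.4f}")

  Each pass through the loop shrinks the error, which is exactly how iteration narrows down the space of hypotheses the probability model has to consider.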

    Big-data-based design denies that the problems of the AI field, including natural language understanding and object recognition, can be solved by a simple algorithm alone. On the one hand, after decades of effort, the academic community has had to admit that natural language cannot be fully expressed by a few brief formulas. On the other hand, Google has taken advantage of a huge amount of raw language material in its translation service, and it can output the main meaning of a foreign-language text. This suggests that big data is important to AI development, even though the method cannot by itself guarantee high-quality translation. What's more, in some areas big data matters more than the algorithm, because abundant raw material contains all the meta-elements of a language and reveals users' language habits. This conclusion may frustrate the many scholars and engineers who are eager to develop a universal, brief algorithm to shed light on the dark AI field. However, it is the truth, though an upsetting one.
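  The point can be shown with a toy Python sketch (my own illustration, nothing like Google's real system): a few lines of counting over a corpus already predict which word tends to follow another, without any hand-written rule of grammar. With a genuinely large corpus, the same counts begin to capture the meta-elements and habits of the language.

    # A toy sketch of big-data-based design: word statistics learned
    # from raw text, with no explicit grammar rules.

    from collections import Counter, defaultdict

    corpus = ("the desk is in the room . the desk is made of wood . "
              "the desk is large .").split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, word in zip(corpus, corpus[1:]):
        following[prev][word] += 1

    # The prediction comes from the data, not from an explicit rule of English.
    print(following["the"].most_common(1))   # -> [('desk', 3)]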

1 comment:

  1. Good point~
    I have a question....how can iteration reduce the computing complexity?
