Thursday, June 13, 2013

Why do some fields develop so rapidly while others develop so slowly?

    About fifty years ago, newspapers told the public that within 50 years we would build marine cities under the sea and migrate to Mars. Fifty years have passed, and those dreams are still far from reality. On the other hand, the development of computer science and the telecommunications industry has exceeded most people's expectations. How did this come about?
    For nearly a hundred years, the properties of construction materials and the architecture of buildings have not changed considerably. By contrast, the clock speed of chips has improved thousands of times, and chip architectures have become far more advanced. If we build the first moon station, the second one will not be much easier than the first, because most of the building materials have to be transported to the moon from the earth. By contrast, once we have built the first computer, the second one can be produced just as easily as the first.
    The reason listed above is obvious, so let us consider the question more deeply. In fact, the reason lies in control theory. The semiconductor industry is self-sustaining, while a moon station is hard to keep self-sustaining. That is to say, a high-speed computer contributes to the design and manufacture of more powerful chips, while a bigger moon station only consumes more supplies and power. The computer industry, from hardware to software, has formed a closed-loop system, while building a seafloor city or a moon station is an open-loop system. The output of a closed-loop system with positive feedback is an exponential function of time, which is exactly what Moore's law describes. An exponential function has many advantages over a linear or quadratic function, and the latter is roughly what moon-station construction would follow. The computer world is nearly an autonomous world, as it can develop itself starting from only a little material input, namely silicon and manufacturing lines. Besides the technology subsystem of the computer industry, its finance subsystem is also a closed loop. Computer technologies have improved almost every industry and nearly every person's life, and in return those industries and people contribute a continuous cash flow to the computer industry. Huge market demand drives semiconductor manufacturers to produce more powerful and more energy-efficient chips.
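    To make the contrast concrete, here is a toy numerical sketch (my own illustration, not a model of any real industry): a closed loop whose output is fed back into itself grows exponentially, while an open loop fed only by a constant external supply grows linearly.

```python
# Toy sketch (not a model of any real industry): a closed loop with positive
# feedback grows exponentially, while an open loop that only receives a fixed
# external input grows linearly.

def closed_loop(years, gain=0.4, start=1.0):
    """Each year's output is reinvested, so output(t) ~ start * (1 + gain)**t."""
    output, history = start, []
    for _ in range(years):
        output += gain * output      # the feedback term: output feeds itself
        history.append(output)
    return history

def open_loop(years, external_input=0.4, start=1.0):
    """Output grows only by a constant external supply, so it is linear in t."""
    output, history = start, []
    for _ in range(years):
        output += external_input     # no feedback: growth does not depend on output
        history.append(output)
    return history

if __name__ == "__main__":
    for t, (c, o) in enumerate(zip(closed_loop(20), open_loop(20)), 1):
        print(f"year {t:2d}: closed loop = {c:10.2f}, open loop = {o:6.2f}")
```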
    Then we raise the question: which industry will be the next autonomous industry? My answer is artificial intelligence. The soul of AI is to let it become an autonomous system: it learns from books and the internet and synthesizes that knowledge; afterwards it uses the knowledge to solve our problems. This will be an even more tightly closed positive-feedback system than the semiconductor industry.

Monday, December 17, 2012

How to Develop Artificial Intelligence


  Direction is more important than effort. In the area of artificial intelligence in particular, we face the direction problem: how should we develop AI? What should we do to make AI more intelligent?

  In labs all over the world, plenty of scientists and engineers are focusing their efforts on AI, which consists of a long list of subareas, such as image recognition, natural language understanding, and meta-heuristic algorithms. As a result, an almost Avogadro-sized number of algorithms have been proposed in papers. Unfortunately, none of these algorithms has a sense of beauty, either in form or in content. This frustrating result makes us reconsider the road we are driving on, and it brings us back to the original question: who built this improper road we are now on? Is it our scientific common sense, inherited from the development of other subjects such as mechanical engineering and electronic engineering? Those subjects were propelled by a few great people, such as Newton and Maxwell, and built on the basis of a few brief laws. In contrast, the AI area has developed without any quantitative rules or laws, if we count Isaac Asimov's 'three laws of robotics' as non-quantitative ones.
Are quantitative laws essential to AI? This is a tough problem. Many people who possess perfect intelligence know little about math. However, without quantitative laws, should we develop the system only by textual description?

   Despite those chaotic discussions, I believe that some brief framework should be built in advance for AI development. It has three parts: probability-based design, iteration-based design, and big-data-based design.
Probability-based design holds that the logic in the AI world is not as certain as that in other engineering and scientific areas. None of the "facts" in the AI world is beyond doubt. There is no rigid, strict derivation: every derivation carries a confidence probability. For example, if an AI recognizes the characters on a page, it may output an "a" or an "α", the former with a confidence of 80%, the latter with a confidence of 18%, and all other candidates with a combined confidence of 2%. In some extreme conditions the recognition is definitive, and we refer to it as 100% for simplicity; however, the idea of probability-based design should be considered before that simplification. Another example comes from doubt about physical laws. An AI should assign every acknowledged physical rule a confidence: rules such as Newton's three laws of mechanics would be prescribed a considerably high confidence, say 99.9999%, while black-hole theory would be given a lower confidence, such as 80%. Every derivation in the AI world is uncertain, because none of the "facts" or rules that the derivation depends on is certain. In total, the AI world is built on uncertainty. This design brings extreme complications, but what we can be sure of is that only in this way can we really make AI more intelligent. People tend to doubt what they see, and can either achieve a breakthrough or make mistakes in really simple situations. Essentially, humans are "designed" according to the probability-based model, which differentiates us from rigid computer programs.
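As a minimal sketch of what such an interface might look like, the snippet below returns ranked hypotheses with confidences instead of a single hard answer; the numbers and the `recognize_character` stub are hypothetical, taken from the example above rather than from any real recognizer.

```python
# Minimal sketch of a probability-based recognizer interface, using the
# hypothetical confidences from the paragraph above (80% 'a', 18% 'α', 2% other).
# The scoring function is a stand-in, not a real recognition model.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str
    confidence: float   # in [0, 1]; all hypotheses together should sum to ~1

def recognize_character(image) -> list[Hypothesis]:
    """Return a ranked list of hypotheses instead of a single hard answer."""
    # In a real system these numbers would come from a trained model.
    return [
        Hypothesis("a", 0.80),
        Hypothesis("α", 0.18),
        Hypothesis("<other>", 0.02),
    ]

def best_guess(hypotheses: list[Hypothesis], threshold: float = 0.99) -> Hypothesis:
    """Only 'simplify to 100%' when one hypothesis clearly dominates."""
    top = max(hypotheses, key=lambda h: h.confidence)
    if top.confidence >= threshold:
        top = Hypothesis(top.label, 1.0)   # the extreme, definitive case
    return top

if __name__ == "__main__":
    hyps = recognize_character(image=None)
    print([(h.label, h.confidence) for h in hyps])
    print("best:", best_guess(hyps))
```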

   The probability-based model is a unique way to make AI intelligent. However, the extreme complexity of the computation and chaotic effects can make the model ineffective on its own. Then iteration-based design can be taken into consideration. Iteration can reduce uncertainty and rule out improper hypotheses; as a result, it reduces the computational complexity. In fact, iteration is a process of verifying the correctness of a probability model. For example, if an AI recognizes a 3D object in a series of pictures, it may suppose that the object is a desk. As the iteration begins, the AI renders a 3D desk, takes a snapshot of it from a suitable perspective, and compares it with the object in the pictures, continuously correcting the model until the difference between them can be ignored. Although some people may regard iteration-based design as an extension of probability-based design, it is actually a basic design principle of AI development, for it lets us realize that a system built on probability alone is fragile and unachievable. From the perspective of control theory, iteration turns the derivation into a closed loop, making the system steadier and more accurate than an open loop.
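The loop itself can be sketched in a few lines; the `render`, `compare`, and `adjust` steps below are placeholders that fit a single number, standing in for the real rendering-and-comparison machinery.

```python
# Skeleton of the render-compare-correct loop described above.  The point is
# only the closed-loop structure: each iteration shrinks the error of the
# current hypothesis until it can be ignored.

def refine_hypothesis(observed, initial_params, tolerance=1e-3, max_iters=100):
    params = initial_params
    for _ in range(max_iters):
        rendered = render(params)            # e.g. draw a 3D desk with these parameters
        error = compare(rendered, observed)  # mismatch between rendering and pictures
        if abs(error) < tolerance:           # stop once the difference can be ignored
            break
        params = adjust(params, error)       # correct the model and try again
    return params

# Placeholder steps so the skeleton actually runs, fitting a single number;
# a real system would render images and compare them pixel by pixel.
def render(params):
    return params

def compare(rendered, observed):
    return observed - rendered               # signed error

def adjust(params, error):
    return params + 0.5 * error              # move partway toward what was observed

if __name__ == "__main__":
    print(refine_hypothesis(observed=3.0, initial_params=0.0))
```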

    Big-data-based design denies that problems in the AI area, including natural language understanding and object recognition, can be solved by a simple algorithm alone.
On the one hand, after decades of effort, the academic community has had to admit that natural language cannot be fully expressed with a few brief formulas. On the other hand, Google has taken advantage of a great amount of raw language material in its translation service, and it can output the main meaning of a foreign-language text. This suggests that big data is important to AI development, even though the method cannot always ensure high-quality translation. What's more, in some areas the data is more important than the algorithm, because an abundant raw corpus contains all the meta-elements of a language and reflects users' language habits. Maybe this conclusion frustrates many scholars and engineers who are eager to develop a universal, brief algorithm to shed light on the dark AI area. However, it is the truth, though an upsetting one.
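A deliberately tiny sketch of the "data over algorithm" idea: the toy "translator" below has no grammar rules at all, only a phrase table counted from a hypothetical parallel corpus, so its quality depends entirely on how much data fills that table. This only illustrates the principle; it is not Google's actual method.

```python
# Toy illustration of "data over algorithm": no grammar rules, only a phrase
# table counted from a (hypothetical) parallel corpus.

from collections import Counter, defaultdict

def build_phrase_table(aligned_pairs):
    """Count how often each source phrase was seen with each target phrase."""
    counts = defaultdict(Counter)
    for source, target in aligned_pairs:
        counts[source][target] += 1
    # Keep the most frequent translation for each source phrase.
    return {src: tgt.most_common(1)[0][0] for src, tgt in counts.items()}

def translate(sentence, phrase_table):
    """Look up each word; leave unknown words untouched (the 'major meaning' only)."""
    return " ".join(phrase_table.get(word, word) for word in sentence.split())

if __name__ == "__main__":
    # Hypothetical aligned fragments standing in for a huge corpus.
    corpus = [("bonjour", "hello"), ("bonjour", "hello"), ("monde", "world")]
    table = build_phrase_table(corpus)
    print(translate("bonjour monde", table))   # -> "hello world"
```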

Tuesday, October 30, 2012

What's the next programming paradigm?

    Long ago, we developed the procedure-oriented programming paradigm, which still serves us well today. It brings the procedural way of thinking, considered to be humans' method for solving complex problems, into the computer world. Taking advantage of the unimaginably fast clock speeds and tremendous storage capacity of computers, we can solve problems that were among the hardest in history in a very short time with a few lines of C code. Under these conditions, complex pure computing problems, such as computing hundreds of digits of π, are no longer on the table for mathematicians and engineers, whether professional or amateur.
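    As a reminder of how short such a program can be, here is a sketch (in Python rather than C) that prints a couple of hundred digits of π using Machin's formula π/4 = 4·arctan(1/5) − arctan(1/239) with fixed-point integer arithmetic.

```python
# A procedure-oriented example in the spirit of the paragraph above: hundreds
# of digits of π from a short program, via Machin's formula and big integers.

def arccot(x, unity):
    """arctan(1/x) * unity, computed with integer arithmetic only."""
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def pi_digits(digits=200):
    guard = 10                             # extra guard digits against rounding error
    unity = 10 ** (digits + guard)
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return pi // 10 ** guard               # drop the guard digits

if __name__ == "__main__":
    print(pi_digits(200))                  # 31415926535897932384... (decimal point omitted)
```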
    While some people are still immersed in the surprising and brilliant world created by procedure-oriented programming, application developers and maintainers are dispirited by swollen and rigid architectures. Applications usually share some similar properties, so how shall we develop a language that lets applications express themselves in a universal and simple way? As an inspired result, we developed object-oriented languages: developers model the computer world with objects and methods, and development and maintenance become easier.
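    A deliberately small, hypothetical illustration of that style: the properties shared by "applications" are gathered into one base class, and each concrete application only overrides what differs.

```python
# Hypothetical example of modeling the world with objects and methods: shared
# behaviour lives in a base class, concrete applications override the rest.

class Application:
    def __init__(self, name):
        self.name = name

    def run(self):
        self.load_config()
        self.handle_requests()

    def load_config(self):
        print(f"{self.name}: loading default configuration")

    def handle_requests(self):
        raise NotImplementedError("each application supplies its own behaviour")

class WebShop(Application):
    def handle_requests(self):
        print(f"{self.name}: serving shop pages")

class ChatServer(Application):
    def handle_requests(self):
        print(f"{self.name}: relaying messages")

if __name__ == "__main__":
    for app in (WebShop("shop"), ChatServer("chat")):
        app.run()
```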
    However, programmers still spend much of their time on language-level programming. Code is the tool for solving real problems, yet the tool is weak, just as if a farmer tried to exploit an abundant oil reserve 3,860 feet underground with a shovel. In the past decades, some IT companies with a great number of brilliant, hard-working programmers have, surprisingly, finally dug down to the target area; meanwhile, in the real world thousands of mines already exist, and many of them were excavated a hundred years ago with crude, primitive machines.
    Then comes the question: when will the mining machine appear in the programming world? How can we build and control a programming machine instead of digging every patch of tough soil with a blunt shovel? This question should be solved by the next programming paradigm, which I suppose will be mechanism-oriented programming.
    In my view, mechanism-oriented programming languages can remove much of the language-level workload attached to algorithms and data structures; the code and annotations that belong to an ordinary programming block are generated automatically by the development environment.
    Mechanism-oriented programming is not led by the procedural paradigm or the object paradigm; it simply regards programming as a production process, in which all we need to do is send the raw material to the pipeline. Programming becomes preparing the "raw material" for a high-speed pipeline, which means that software production enters the industrial era instead of the handcraft era. As a result, the quality of software will be much more stable and the price far cheaper. Programmers will focus on new algorithms and other technologies, including the design of the programming pipeline itself, rather than implementing popular algorithms repeatedly.
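    Since mechanism-oriented programming is only a proposal here, there is no real toolchain to show; the following is merely a toy sketch of the pipeline idea under my own assumptions: the programmer supplies declarative "raw material" (a spec), and a generator standing in for the pipeline turns it into working code without hand-written boilerplate.

```python
# Toy sketch of the "pipeline" idea: declarative raw material in, code out.
# The spec and generator are hypothetical illustrations, not a real toolchain.

SPEC = {
    "record": "Order",
    "fields": [("order_id", int), ("customer", str), ("total", float)],
}

def generate_record(spec):
    """The 'pipeline': build a class from the spec instead of writing it by hand."""
    def __init__(self, **kwargs):
        for name, typ in spec["fields"]:
            setattr(self, name, typ(kwargs[name]))

    def __repr__(self):
        values = ", ".join(f"{name}={getattr(self, name)!r}" for name, _ in spec["fields"])
        return f"{spec['record']}({values})"

    return type(spec["record"], (), {"__init__": __init__, "__repr__": __repr__})

if __name__ == "__main__":
    Order = generate_record(SPEC)                       # raw material in, code out
    print(Order(order_id=7, customer="Ada", total=19.9))
```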
   The new paradigm may bring a bright future to the programming area, and it will also raise many new problems, but this "industrialization process" cannot be resisted by any person, company, or organization, for it simply reflects the history we have gone through or will go through!