Neural networks make the loudest noise in the current AI craze, but AI is much more than neural networks.

Within AI, the most research money goes to neural networks. To many, a neural network looks like a “programmed brain,” though the analogy is far from exact.

The concept of the neural network was put forward as early as the 1940s, yet even now we know surprisingly little about how neurons and brains actually work. In recent years there have been growing calls in the scientific community for fresh innovation around neural networks, in the hope of restarting the boom.

Beyond neural networks, the field of AI holds many other interesting, novel, and promising techniques, and this article introduces some of them.

Knol extraction

A knol is a unit of information: a keyword, a fact, a small piece of knowledge. Knol extraction is the process of pulling that key information out of text. A simple example: the sentence “as the name suggests, an octopus has eight legs” could be extracted into something like {“octopus”: {“number of legs”: 8}}.

The Google search engine relies heavily on this technique, and several of the technologies described below build on it as well.
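
To make the idea concrete, here is a minimal sketch of knol extraction, assuming nothing more than one hand-written pattern for sentences of the form “an X has N Y”; a real extractor would use a full NLP pipeline rather than a single regular expression, and the function name is invented for this example.

```python
import re

# Hypothetical toy extractor: one pattern for "an <entity> has <number> <attribute>".
WORD_NUMBERS = {"two": 2, "four": 4, "six": 6, "eight": 8}

def extract_knols(text):
    knols = {}
    pattern = r"(?:an?|the)\s+(\w+)\s+has\s+(\w+)\s+(\w+)"
    for entity, value, attribute in re.findall(pattern, text, re.IGNORECASE):
        number = WORD_NUMBERS.get(value.lower(), value)
        knols.setdefault(entity, {})[f"number of {attribute}"] = number
    return knols

print(extract_knols("As the name suggests, an octopus has eight legs."))
# {'octopus': {'number of legs': 8}}
```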

Ontology construction

Ontology building is an NLP-based technique that uses software to construct hierarchies of entities and the relationships between them, which goes a long way toward enabling conversational AI. Although ontology building looks simple, it is not, mainly because the real connections between things are more complex than they appear.

For example, using NLP to analyze text to build a set of entity relationships:

The sentence “my Labrador retriever has just had a puppy; the father is a poodle, so the puppy is a Labradoodle (a mixed-breed dog)” would be converted into something like: {“puppy”: {“may be”: “Labradoodle”, “has father”: “poodle”}, “Labrador retriever”: {“has”: “puppy”}}.

However, human language usually does not state every relation explicitly. In this sentence, the fact that “my Labrador is female” can only be reached by inference, and this is exactly what makes ontology construction difficult.

As a result, ontology-building technology is currently used only in the most capable chatbots.
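
As a rough illustration (not a real NLP system), the sketch below stores the stated relations from that sentence as nested dictionaries and adds one hand-written inference rule for the unstated fact that the Labrador is the puppy's mother; all entity and relation names are invented for the example.

```python
from collections import defaultdict

ontology = defaultdict(dict)

def add_relation(subject, relation, obj):
    ontology[subject].setdefault(relation, []).append(obj)

# Relations stated directly in the sentence
add_relation("Labrador retriever", "has", "puppy")
add_relation("puppy", "has father", "poodle")
add_relation("puppy", "may be", "Labradoodle")

# Inference step: if X has the puppy and someone else is the father,
# then X is presumably the puppy's mother -- a fact the text never states.
for entity, relations in list(ontology.items()):
    if "puppy" in relations.get("has", []):
        fathers = ontology["puppy"].get("has father", [])
        if fathers and entity not in fathers:
            add_relation("puppy", "has mother", entity)

print(dict(ontology))
# {'Labrador retriever': {'has': ['puppy']},
#  'puppy': {'has father': ['poodle'], 'may be': ['Labradoodle'],
#            'has mother': ['Labrador retriever']}}
```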

Custom heuristics

A heuristic is a classification rule, usually a conditional such as “if this item is red” or “if Bob is at home”, often paired with an action or decision, for example:

If an item’s attributes contain “arsenic”:

its [“poisonous”] property is True.

Every new piece of information can bring new heuristics and new relationships, and as heuristics accumulate, the system’s understanding of the related terms grows. For example:

Heuristic 1: “puppy” indicates a baby animal.

Heuristic 2: babies are very young.

From these two heuristics it can be inferred that all puppies are young.

The difficulty with heuristics is that most rules are not as simple as “if/then”. A statement like “some people have blond hair” is much harder to capture as a heuristic, which is why we also need “epistemology” (see below).
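
Here is a toy rule-engine sketch of custom heuristics, chaining the arsenic rule and the puppy rules described above; the rule and attribute names are invented for illustration.

```python
# Each heuristic is a (condition, action) pair over an item's attribute dictionary.
heuristics = [
    (lambda item: "arsenic" in item.get("contains", []),
     lambda item: item.update({"poisonous": True})),
    (lambda item: item.get("kind") == "puppy",
     lambda item: item.update({"is": "baby"})),
    (lambda item: item.get("is") == "baby",
     lambda item: item.update({"age": "very young"})),
]

def apply_heuristics(item):
    # Re-run the rules until nothing changes, so one rule can feed another
    # (puppy -> baby -> very young).
    changed = True
    while changed:
        before = dict(item)
        for condition, action in heuristics:
            if condition(item):
                action(item)
        changed = item != before
    return item

print(apply_heuristics({"kind": "puppy"}))
# {'kind': 'puppy', 'is': 'baby', 'age': 'very young'}
print(apply_heuristics({"name": "old paint", "contains": ["arsenic"]}))
# {'name': 'old paint', 'contains': ['arsenic'], 'poisonous': True}
```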

Epistemology

Epistemology combines ontology construction with custom heuristics and adds probability, expressing how likely it is that a given attribute applies to a given noun. For example, take this ontology structure:

{“person”: {“gender”: {“male”: 0.49, “female”: 0.51}, “race”: {“Asian”: 0.6, “African”: 0.14}}}

It expresses probabilistic judgements about a person’s gender and race. The same idea helps resolve phrases that carry several possible meanings at once: in a sentence like “a plum is like a raisin pumped full of hormones”, the phrase “pumped full of hormones” most likely means “much larger”, so the sentence most likely means “a plum is bigger than a raisin”.
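
A minimal sketch of that idea follows, assuming made-up probabilities: the ontology entries carry weights, and an ambiguous phrase is resolved by choosing its most probable sense. The phrase senses and their numbers are illustrative, not measured values.

```python
# "person" mirrors the ontology structure above; probabilities are invented.
ontology = {
    "person": {
        "gender": {"male": 0.49, "female": 0.51},
        "race": {"Asian": 0.6, "African": 0.14},
    },
}

# Possible readings of an ambiguous phrase, with assumed probabilities.
phrase_senses = {
    "pumped full of hormones": {"much larger": 0.8, "more aggressive": 0.15, "literal": 0.05},
}

def most_likely(distribution):
    # Pick the highest-probability interpretation.
    return max(distribution.items(), key=lambda kv: kv[1])

print(most_likely(ontology["person"]["gender"]))              # ('female', 0.51)
print(most_likely(phrase_senses["pumped full of hormones"]))  # ('much larger', 0.8)
```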

Implementing epistemology is much harder than ontology construction. It needs more data; the complexity of its structure makes it hard to build a database that can be searched quickly once the rules are set; and the rules are often based on how frequently something is mentioned in text, which does not necessarily reflect reality.

Epistemology is similar to the idea of “tensor flow” proposed by Asimov. Google’s TensorFlow system of the same name is not really based on tensors, whereas epistemology is.

Automatic gauge technology

Any measuring system needs evaluation criteria. Imagine buying a house: size, location, price, and style all matter, and they do not all pull in the same direction. If you care more about size than price, for instance, you may pay several times more for a bigger house.

Automatic gauging suggests decisions by working out how much weight you attach to each factor. The same process can be used to predict inventory changes, recommend products, or support autonomous driving. In other words, automatic gauging can perform most of the functions a neural network can, and can make decisions orders of magnitude faster, although it may require longer training.
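
A minimal weighted-scoring sketch of the house example follows; the weights, factor scores, and house names are invented, and a real system would learn the weights from your past choices rather than hard-coding them.

```python
# Every factor is normalised to [0, 1]; the weights say how much you care about it.
weights = {"size": 0.5, "location": 0.2, "price": 0.1, "style": 0.2}

houses = {
    "big house uptown":   {"size": 0.9, "location": 0.6, "price": 0.2, "style": 0.7},
    "small house nearby": {"size": 0.4, "location": 0.9, "price": 0.8, "style": 0.5},
}

def gauge(candidate):
    # Weighted score; higher is better.
    return sum(weights[factor] * candidate[factor] for factor in weights)

for name, house in houses.items():
    print(f"{name}: {gauge(house):.2f}")
print("suggested choice:", max(houses, key=lambda n: gauge(houses[n])))
# big house uptown: 0.73, small house nearby: 0.56 -> suggests the big house
```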

Vector difference

Vector difference techniques are often used for image analysis and for time-varying data. By building an abstract vector representation of a target and comparing candidates against it, the system can judge whether something is “the most attractive face shape” or “the best time to buy”.

Usually the differences between objects are measured with a quantitative rule that scores how far apart they are. By vectorizing the features, otherwise “fuzzy” concepts are expressed simply and explicitly.

Humans, for example, generally find symmetrical faces more attractive, but a computer needs an exact calculation. Abstracting a face into roughly thirty triangles and comparing those, rather than operating on the full face image, saves a great deal of computing time and storage space.

Non-image data can be processed the same way. A stock’s price changes, its earnings-per-share ratio, its margins, and so on can be vectorized and compared with ideal values to judge how good or how risky an investment is.
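
As a rough sketch, the snippet below vectorizes a couple of made-up stocks and scores each one by its Euclidean distance from a hypothetical “ideal investment” vector; the feature choices and values are illustrative assumptions.

```python
import math

# Features: [recent price change, earnings-per-share ratio, margin], roughly normalised.
ideal_investment = [0.10, 0.80, 0.30]
candidates = {
    "stock A": [0.12, 0.75, 0.28],
    "stock B": [-0.40, 0.20, 0.05],
}

def vector_difference(candidate, ideal):
    # Euclidean distance between feature vectors; smaller means closer to ideal.
    return math.sqrt(sum((c - i) ** 2 for c, i in zip(candidate, ideal)))

for name, features in candidates.items():
    print(name, round(vector_difference(features, ideal_investment), 3))
# stock A (0.057) is much closer to the ideal vector than stock B (0.82)
```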

Matrix convolution

Convolution matrices are widely used for edge detection and contrast enhancement in image processing. Many Photoshop filters, for example, are based on a convolution matrix or on stacked convolutions (several convolution operations applied in a specific order).

A convolution matrix can also process non-image data. Applied to a time-series vector, a convolution picks out patterns much as edge detection does, and the minima or maxima it exposes can then be checked against specific values or ranges to make a judgement.
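
The sketch below slides a simple difference kernel over a made-up price series, the one-dimensional analogue of an edge-detection filter; strictly speaking it computes a cross-correlation (the kernel is not flipped), which is how image tools usually apply such kernels anyway.

```python
def convolve_1d(series, kernel):
    # Slide the kernel over the series and sum the element-wise products.
    half = len(kernel) // 2
    return [
        sum(w * k for w, k in zip(series[i - half:i + half + 1], kernel))
        for i in range(half, len(series) - half)
    ]

prices = [10, 10, 10, 11, 15, 15, 15, 14, 10, 10]
edge_kernel = [-1, 0, 1]   # responds strongly where the series jumps

print(convolve_1d(prices, edge_kernel))
# [0, 1, 5, 4, 0, -1, -5, -4]: large positive values mark sharp rises,
# large negative values mark sharp drops.
```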

Multi-view decision-making systems

Decisions are not easy to make. A multi-view decision-making system makes them in a more democratic way, weighing several perspectives at once.

Take the house example again: your fondness for a particular house may rest on incomplete information, and the later discovery that it sits on the edge of a cliff (an overwhelming factor that might well come from knol extraction) can wipe out all of that goodwill and force you to reconsider.

Decisions therefore need to be weighed against a broader set of factors, and a multi-view system can score a decision against two sets of criteria for two people (say, you and your spouse). The same approach applies to autonomous driving, for example by gathering the views of 10,000 drivers to create a new standard.
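
A toy sketch of the idea follows, with invented weights and facts: the same houses are scored under two different sets of weights (two “views”, say you and your spouse), a veto rule knocks out anything sitting on a cliff edge, and the views are then averaged.

```python
views = {
    "you":    {"size": 0.5, "location": 0.2, "price": 0.3},
    "spouse": {"size": 0.2, "location": 0.5, "price": 0.3},
}

houses = {
    "cliff house": {"size": 0.9, "location": 0.8, "price": 0.4, "on_cliff_edge": True},
    "town house":  {"size": 0.6, "location": 0.7, "price": 0.7, "on_cliff_edge": False},
}

def score(house, weights):
    if house["on_cliff_edge"]:        # overwhelming factor: immediate veto
        return 0.0
    return sum(weights[factor] * house[factor] for factor in weights)

for name, house in houses.items():
    combined = sum(score(house, w) for w in views.values()) / len(views)
    print(name, round(combined, 2))
# the cliff house scores 0.0 in every view despite otherwise good numbers
```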

In closing: more skills are never a burden

Many people have only one tool and fall into the trap of “all I have is a hammer, so everything is a nail.” Companies such as Recognant use some of the relatively obscure techniques described here alongside neural networks because, compared with neural network hardware, these software techniques can be adapted and extended to new situations at no extra cost. The narrower a technology is, the more likely it is to get stuck in a particular situation; the broader it is, the easier problems become to solve.