This is our 139th original article (no fluff). To get more original articles, search for and follow our WeChat public account. This article was first published on the Zhengcaiyun front-end blog: How to Land an Intelligent Robot.

Preface

With the rise of AI, conversational AI products are becoming more common. From a product-definition standpoint, the most fundamental value of an intelligent question-answering product lies in replacing large amounts of repetitive manual work at low cost. Because our company's business systems are complex, developers spend much of their time handling feedback from technical support, the business side, and online "problems" reported by colleagues. Some of these "questions" repeat so often that they can easily be collected into FAQs. Yet the status quo is that colleagues still answer similar questions over and over, which eats up a lot of time in ineffective "communication". Given these pain points, we felt it was necessary to use an intelligent question-answering robot to manage this set of FAQs. In addition, the robot provides a closed-loop online ONCALL question-answering mechanism, which makes it easier to manage the lifecycle of every question, and to summarize, classify, and review question data afterwards, realizing ONCALL tracking and automated QA response. This article briefly walks through the design and rollout of "Jarvis", the intelligent question-answering robot we built for our Zhengcaiyun colleagues.

Architecture design

Why the name "Jarvis"? Fans of Iron Man will know it as Tony Stark's intelligent assistant, an artificial intelligence from the Marvel universe. Full name: Just A Rather Very Intelligent System. Jarvis' architecture is as follows:

The overall concept behind Jarvis is microservices. This makes it easy to extend and convenient to connect with any of our company's existing capabilities, while real usage scenarios in turn feed back into the construction and implementation of those capabilities. Jarvis' advantage is that it is easily accessible, small, and close to users. Our initial positioning for Jarvis was simple and clear: provide colleagues with common QA response and automation capabilities. The following two sections introduce Jarvis.

QA response capability

The first part is the QA response capability. In the spirit of "anything that can be written in JS will eventually be written in JS", our first QA capability was provided by nlp.js. Why nlp.js? First, it is very convenient for our front-end partners to use. Second, we wanted to quickly explore the feasibility of landing the idea and its real usage scenarios. Thus our Jarvis V1.0 was born; here is a simple implementation:

Jarvis V1.0

Step 1: Build the project

Open your usual IDE, create a new project folder, and lay out the project structure as shown below:

├── buildable.js
├── dist
│   └── bundle.js
├── index.html
└── package.json

In buildable.js, write the following basic code:

const core = require('@nlpjs/core');
const nlp = require('@nlpjs/nlp');
const langenmin = require('@nlpjs/lang-en-min');
const requestrn = require('@nlpjs/request-rn');

window.nlpjs = { ...core, ...nlp, ...langenmin, ...requestrn };

Since we only use the NLP core code and a small English language package, just write the following into your package.json:

{
  "name": "nlpjs-web",
  "version": "1.0.0",
  "scripts": {
    "build": "browserify ./buildable.js | terser --compress --mangle > ./dist/bundle.js"
  },
  "devDependencies": {
    "@nlpjs/core": "^4.14.0",
    "@nlpjs/lang-en-min": "^4.14.0",
    "@nlpjs/nlp": "^4.15.0",
    "@nlpjs/request-rn": "^4.14.3",
    "browserify": "^17.0.0",
    "terser": "^5.3.8"
  }
}

We reference the bundled bundle.js file in index.html and write the following code:

<html>
<head>
    <title>NLP in a browser</title>
    <script src='./dist/bundle.js'></script>
    <script>
        const {containerBootstrap, Nlp, LangEn, fs} = window.nlpjs;

        const setupNLP = async corpus => {
            const container = containerBootstrap();
            container.register('fs', fs);
            container.use(Nlp);
            container.use(LangEn);
            const nlp = container.get('nlp');
            nlp.settings.autoSave = false;
            await nlp.addCorpus(corpus);
            await nlp.train();
            return nlp;
        };

        const onChatSubmit = nlp => async event => {
            event.preventDefault();
            const chat = document.getElementById('chat');
            const chatInput = document.getElementById('chatInput');
            chat.innerHTML = chat.innerHTML + `<p>you: ${chatInput.value}</p>`;
            const response = await nlp.process('en', chatInput.value);
            chat.innerHTML = chat.innerHTML + `<p>chatbot: ${response.answer}</p>`;
            chatInput.value = '';
        };

        (async () => {
            const nlp = await setupNLP('https://raw.githubusercontent.com/jesus-seijas-sp/nlpjs-examples/master/01.quickstart/02.filecorpus/corpus-en.json');
            const chatForm = document.getElementById('chatbotForm');
            chatForm.addEventListener('submit', onChatSubmit(nlp));
        })();
    </script>
</head>
<body>
<h1>NLP in a browser</h1>
<div id="chat"></div>
<form id="chatbotForm">
    <input type="text" id="chatInput" />
    <input type="submit" id="chatSubmit" value="send" />
</form>
</body>
</html>

Step 2: Package the project

To package the buildable.js source file into dist/bundle.js, run the following command:

npm run build

Step 3: Run the project

Next, open index.html in your browser, and you should see a simple interactive chatbot, as shown below:

Convenient indeed: in just three steps you have trained a chatbot, and it feels great. But consider the shortcomings: a corpus held in the browser has no security or privacy protection, and common natural language processing capabilities are missing. For example, it cannot provide active learning, entity extraction, or customization for personalized needs. So with V1.0 we were really just exploring feasibility; interested readers can follow the steps above and give it a try.

Jarvis V2.0

Given the limitations of the first version, for the second version we leveraged the company's resources and cooperated with the AI team, which fully took over the underlying training capability. For example, the popular BM25 ranking algorithm is used for match retrieval, and BERT is used to build the network model for semantic analysis. We only need to provide a standard training data source and the corresponding landing pages to host it. This created, in a real sense, a safe and reliable intelligent question-answering robot that belongs to our company.

How do we collaborate with the AI team? We agreed on RESTful interfaces for the underlying communication. As a result, we can support more forms of interaction: web, mobile, plugins, and so on, and Jarvis can handle all of them. In addition, we gave Jarvis more diverse forms of feedback, such as likes, dislikes, and comments; this feedback data is later used as negative samples for model training.
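As a sketch of what such a RESTful agreement might look like, the front end only needs a thin layer that builds the request and normalizes the reply. The endpoint path, field names, and fallback text below are illustrative assumptions, not our real contract:

```javascript
// Build the payload sent to the AI team's QA service.
// '/api/qa/ask' and the field names are hypothetical.
function buildAskRequest(question, channel) {
  // channel distinguishes the caller: 'web' | 'mobile' | 'plugin'
  return {
    method: 'POST',
    url: '/api/qa/ask',
    body: { q: question, channel: channel }
  };
}

// Normalize whatever the AI service returns into what the chat UI renders
function parseAskResponse(res) {
  return {
    answer: res.answer || 'Sorry, I do not know that yet.',
    score: typeof res.score === 'number' ? res.score : 0
  };
}
```

Because every carrier (web, mobile, plugin) goes through the same two functions, adding a new interaction form does not touch the AI side at all.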

For the web side, we used the popular ChatUI to help us develop. It is a design and development system for the conversational domain that helps build the framework of an intelligent dialogue robot, and it can be set up in a few simple steps:

Using ChatUI

Write an HTML file (index.html) and a script file (setup.js):

In index.html, write the following code:

<!DOCTYPE html>
<html lang="zh-CN">
  <head>
    <meta name="renderer" content="webkit" />
    <meta name="force-rendering" content="webkit" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=0, minimum-scale=1.0, maximum-scale=1.0, viewport-fit=cover" />
    <title>jarvis</title>
    <link rel="stylesheet" href="//g.alicdn.com/chatui/sdk-v2/0.2.4/sdk.css">
  </head>
  <body>
    <div id="root"></div>
    <script src="//g.alicdn.com/chatui/sdk-v2/0.2.4/sdk.js"></script>
    <script src="//g.alicdn.com/chatui/extensions/0.0.7/isv-parser.js"></script>
    <script src="/setup.js"></script>
    <script src="//g.alicdn.com/chatui/icons/0.3.0/index.js" async></script>
  </body>
</html>

Write the following code in setup.js:

var bot = new ChatSDK({
  config: {
    navbar: {
      title: 'Smart Assistant'
    },
    robot: {
      avatar: '//gw.alicdn.com/tfs/TB1U7FBiAT2gK0jSZPcXXcKkpXa-108-108.jpg'
    },
    messages: [
      {
        type: 'text',
        content: {
          text: 'Smart Assistant is at your service. How can I help you?'
        }
      }
    ]
  },
  requests: {
    send: function (msg) {
      if (msg.type === 'text') {
        return {
          url: '//api.server.com/ask',
          data: {
            q: msg.content.text
          }
        };
      }
    }
  },
  handlers: {
    parseResponse: function (res, requestType) {
      if (requestType === 'send' && res.Messages) {
        // Parse ISV message data
        return isvParser({ data: res });
      }
      return res;
    }
  }
});

bot.run();

Open index.html in your browser and see the following:

More detailed usage documentation can be found on the official website.

Build a DingTalk robot

In addition to the carriers above, we also integrated deeply with DingTalk robots and DingTalk's push capability to do many interesting things. On the one hand, everyone in the company uses DingTalk for daily work; on the other hand, push messages are delivered immediately. So how do you develop an in-house robot? The following steps build a DingTalk robot simply and quickly:

  1. Log in to the DingTalk developer backend, choose Application Development > Enterprise Internal Development > Robot, click Create Application, and fill in the robot's basic configuration.
  2. When a user @-mentions the robot, DingTalk forwards the message content to the HTTPS service address registered by the developer; the request headers look like this:
    {
      "Content-Type": "application/json; charset=utf-8",
      "timestamp": "1577262236757",
      "sign": "xxxxxxxxxx"
    }
  3. Developers can reply a message to the group according to business needs. Five message types are currently supported: text, markdown, whole-jump actionCard, independent-jump actionCard, and feedCard. Refer to the message types and data formats documentation for details.

Of course, you can also customize more message templates, and even complete complex forms of human-computer interaction inside DingTalk to make your robot look cooler. Like this:

  1. Usage path for enterprise members: enter the group where the robot will be used, click Group Settings > Intelligent Group Assistant > Add Robot, and find it in the list of enterprise robots.

For more detailed documentation, see the article "Enterprise Internal Robot Development". With this, the DingTalk robot is built. To use it, we manually add the enterprise robot to the relevant group.

Integrate DingTalk push

The DingTalk push capability is implemented with ding-bot-sdk:

const Bot = require('ding-bot-sdk')

const bot = new Bot({
  access_token: 'xxx', // access_token from the webhook address (required)
  secret: 'xxx' // secret from the security settings (required)
})

bot.send({
  "msgtype": "text",
  "text": {
      "content": "I am who I am, @150XXXXxxxx is a different firework"
  },
  "at": {
      "atMobiles": ["150XXXXXXXX"],
      "isAtAll": false
  }
})

With all of this in place, our QA response capability becomes much easier to land with the help of the DingTalk robot's push capability.

Automation capability

The second part is the "automation capability". Automation is defined relative to manual work: using scripts to make a controlled object or process run automatically according to predetermined rules, without human involvement. Its biggest advantage is saving labor by turning tedious actions into one click. We only need to orchestrate the corresponding system flows in advance to automate the work.

Jarvis uses instructions as the vehicle to trigger the corresponding scripts. This raises a question: how do we distinguish a user's free-form question from our built-in commands? We did two simple things.

Instruction mode

First, built-in instructions are kept as short as possible. We define them as specific words or characters of limited length, such as ONCALL, on-duty, front-end tabloid, and baice, with the corresponding scripts triggered by these keywords.

Of course, in addition to pure instruction scenarios, we also support the ability to retrieve more parameters through interaction and more customization. As follows:

Implementing this part only requires agreeing on a parameter marker, so the parameters can be obtained by interception, similar to `--save` in npm install webpack --save, where `--` marks an argument. A slightly more complex example: we can hook directly into the company's infrastructure Best system to provide page detection for us, as shown below:
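A minimal sketch of such parameter interception might look like the following. The ` -- ` separator and the function name are illustrative assumptions, not our production code:

```javascript
// Split a chat message into a short instruction and its parameters,
// using ' -- ' as the agreed marker (hypothetical convention).
function parseInstruction(input) {
  const [head, ...rest] = input.trim().split(/\s+--\s*/);
  return {
    instruction: head.trim(),        // the short built-in command
    params: rest.map(s => s.trim())  // anything after a '--' marker
  };
}
```

For the Best page-detection example, a message like `best -- https://example.com` would yield the instruction `best` with the URL as its parameter.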

Threshold calculation mode

Second, a weak-matching mode based on a threshold. We score the user's question, and only when the score falls below the threshold do we fall back to weak matching against the instruction set and suggest how to use the relevant instruction. This is similar to the friendly "did you mean" hint most CLI tools print when you misspell a command in the terminal. Here is a pseudo-code implementation:

const THRESHOLD = 0.25;
const questionStr = 'Who is on duty today?';
const instructionMap = [
    {
        instruction: 'on duty',
        handler: () => console.log('Get the current person on duty'),
    },
    {
        instruction: 'oncall',
        handler: () => console.log('Trigger the ONCALL flow'),
    },
];

const { score, qaAns } = await getQA(questionStr);
if (score > THRESHOLD) {
    return qaAns;
}
// Match an instruction and run the corresponding script action
const [{ instruction, handler }] = instructionMap
    .filter(({ instruction }) => questionStr.indexOf(instruction) > -1);
return handler();

The result is as follows:
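The "did you mean"-style hint mentioned above can be approximated with a classic edit-distance calculation. The following is a sketch under our own assumptions, not the production code:

```javascript
// Levenshtein edit distance between two strings, via dynamic programming.
function editDistance(a, b) {
  // dp[i][j] = distance between a[0..i) and b[0..j)
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Suggest the built-in instruction closest to the user's input
function suggest(input, instructions) {
  return instructions
    .map(cmd => ({ cmd, d: editDistance(input, cmd) }))
    .sort((x, y) => x.d - y.d)[0].cmd;
}
```

With this, a typo like "oncal" can be answered with "did you mean oncall?".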

System layout

In addition to automating a single system, we can also orchestrate and combine the core logic of multiple systems. For example, Jarvis' ONCALL capability is a solution for online problem alerting and tracking, built by combining our company's on-duty system, ICS system, voice system, and Jarvis' built-in work-order system.

Combined with its own AI capability, Jarvis can not only track and manage an ONCALL issue, but also feed the resolved ONCALL records back into QA training, forming a closed loop of ONCALL asking and QA answering.
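That feedback loop can be illustrated as follows. The ticket and corpus shapes below are assumptions for the sketch, loosely modeled on an nlp.js-style corpus entry, not our actual data model:

```javascript
// Hypothetical sketch: once an ONCALL ticket is resolved, append its
// question/answer pair to the QA training corpus.
function appendToCorpus(corpus, ticket) {
  if (ticket.status !== 'resolved') return corpus; // only learn from closed tickets
  return corpus.concat({
    intent: 'oncall.' + ticket.id,   // one intent per resolved ticket
    utterances: [ticket.question],   // the question becomes a training utterance
    answers: [ticket.answer]         // the on-duty colleague's answer
  });
}
```

On the next retraining run, the newly appended entries let QA answer the same question directly, without paging the on-duty colleague again.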

The asking colleague's perspective:

The on-duty colleague's perspective:

The next time a user has a similar question, they can simply ask Jarvis to get the answer. From the user's point of view, all they need to do is use Jarvis; all the automation behind it is hosted by Jarvis.

Promotion of the ground

First of all, we need to believe in our product and know that what we built is valuable. We can think from the following perspectives:

  1. What problem was solved
  2. How it was solved
  3. Who the users are and in what scenarios

The intelligent robot is positioned to help colleagues with common QA responses and automation capabilities. Once you understand the product's value, the next step is to promote its use.

Following a small-to-big strategy, we chose to start promotion with the R&D colleagues around us. R&D colleagues have more demands, such as entering QA data via scripts and one-click access for external systems to Jarvis. Through continuous research we fulfilled some of these "needs", constantly optimizing the experience and even adding features, in order to retain these valuable VIP users. Early word of mouth is very important; these users bring more promotion to the product.

The last step is to collect user feedback and pain points, whether through questionnaires or by reviewing tracking data, in order to better understand how the product is actually used. This not only plugs gaps in time but also lets us hear the voice of the user. In general, the approach is to take a small point as the breakthrough, keep exploring more possibilities, continuously investigate and abstract capabilities, and then confirm and promote them.

Summary and future planning

Jarvis' capability system is shown below:

Looking ahead, Jarvis still has much to do: supporting contextual dialogue for more accurate QA, opening up more external systems to develop a general orchestration capability, and better analyzing the real problems colleagues encounter, solving them together and improving everyone's happiness. We hope the future Jarvis can be small and beautiful, clever and strong. All of this is far from over, to be continued...

Recommended reading

A front-end trainee who has been practicing for 2 years and 8 months

Analysis of VNode and DIff algorithm in Snabbdom

How to use SCSS to achieve one key skin change

Why is index not recommended as key in Vue

Open source works

  • Zhengcaiyun front-end tabloid

Open source address: www.zoo.team/openweekly/ (WeChat group available on the tabloid's official website)

  • Product SKU selection plug-in

Open source address: Github.com/zcy-inc/sku…

Recruiting

ZooTeam is a young, passionate, and creative front-end team, belonging to the product R&D department of Zhengcaiyun, based in picturesque Hangzhou. The team now has more than 60 front-end partners with an average age of 27, and nearly 40% of them are full-stack engineers, a proper youth commando unit. Members include "veterans" from Alibaba and NetEase as well as fresh graduates from Zhejiang University, University of Science and Technology of China, Hangzhou Dianzi University, and other universities. Beyond daily business work, the team explores and practices in material systems, engineering platforms, page-building platforms, performance and experience, cloud applications, and data analysis and visualization, promoting and implementing a series of internal technical products while continuing to explore the new frontier of the front-end technology system.

If you want to change being pushed around by things and start driving things yourself; if you want to change being told "we need more ideas here" while no one has a solution; if you want to change having the ability to deliver but only ever being handed trivial work; if you want to change wanting to accomplish something big but lacking a team to support you; if you want to change the preset pace of "five years of work, three years of experience"; if you want to change having good instincts that are always blocked by a thin, fuzzy pane of glass... If you believe in the power of belief, believe that ordinary people can achieve extraordinary things, and believe you can meet a better version of yourself. If you want to take part in the growth of a front-end team as the business takes off, a team with deep business understanding, a sound technology system, technology that creates value, and influence that spills over, then I think we should talk. Any time, waiting for you to write something and send it to [email protected]