The “360 AI Speaker” will be released soon, and its mobile app is under active development. This article introduces our practices in building the H5 part of the “360 AI Speaker” mobile app, covering:

  • Project environment setup
  • Interacting with Native
  • Custom Chinese fonts
  • Form input
  • Docker deployment

Introduction to the H5 part of the 360 AI Speaker app

The application is mainly divided into four sections:

  1. Content: music, stories, audiobooks, etc.
  2. Skills: operating pre-configured speaker commands
  3. Scenes: user-defined speaker commands
  4. Mine: the user’s smart devices, account, etc.

The “Skills” and “Scenes” sections are built with H5. As shown in the figure below, the Skills section mainly consists of two pages, a list and a detail page, for operating the back-end pre-configured commands.

Note: the images in this article are copyrighted by 360 AI Speaker. Also, since they are screenshots of design mockups, the released app may look different.

The skills list and detail pages mainly involve a custom font: Adobe’s open-source Source Han Serif typeface (source.typekit.com/source-han-…).

Scenes are, in fact, user-defined skills. They are relatively complex: beyond a skills-like display, they involve creating, deleting, and editing scenes, and even integration with the native alarm clock scenes.

Project environment setup

For the front-end H5 there were two candidate architectures: a single-page application (SPA) and the traditional B/S architecture. Since the project involves custom fonts and saving intermediate results of user-defined scenes, we chose the traditional B/S architecture, to avoid the FOIT/FOUT problems of loading large Web fonts (www.zachleat.com/web/fout-vs)… and to take advantage of server-side caching.

Development framework:

  • ThinkJS: 360’s excellent open-source Node.js server framework (thinkjs.org/)
  • Webpack: front-end build and bundling tool (webpack.js.org/)
  • Vue.js: component-based SPA framework (vuejs.org/)

The code structure of the project is as follows:

Among them:

  • deploy: deployment scripts and Docker build scripts
  • frontend: front-end resources, including the Webpack entry files, template files, and JS and CSS assets
  • runtime: the ThinkJS runtime directory, where configuration information is stored
  • src: server-side source code
  • view: server-side template directory; the template files here are compiled and output by Webpack
  • www: server-side static resources, such as Webpack-built JS, CSS, images, and fonts

In the actual project, static resources are uploaded directly to the CDN by a Webpack plugin and images are referenced from the CDN, so the server itself stores no static files.
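For reference, pointing Webpack’s emitted assets at a CDN mostly comes down to setting output.publicPath; a minimal sketch (the CDN path is an assumption, and the upload plugin the project actually used is not named in this article):

// webpack.config.js — a minimal sketch; the CDN path is an assumption
module.exports = {
  output: {
    filename: '[name].[chunkhash].js',
    publicPath: '//s3.ssl.qhres.com/static/' // assets are referenced from the CDN
  }
  // a separate upload plugin then pushes the emitted files to the CDN
}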

Interacting with Native

H5 and Native jointly define two interfaces for both parties to call each other.

1. JS calls Native

// JS calls Native, passing data via the parameters
window.SpkNativeBridge.callNative({ action, params, whoCare })

Interface description:

  • SpkNativeBridge: an interface object implemented by iOS and Android and injected into the WebView
  • callNative: a method of SpkNativeBridge

Parameter Description:

  • action: string, the action you want Native to perform

  • params: JSON object, the data to pass to Native

  • whoCare: number indicating which end JS expects to respond

    • 0: both iOS and Android respond (default)
    • 1: only iOS responds
    • 2: only Android responds

Return value: as agreed upon by both ends

2. Native calls JS

// Native calls JS, passing data via the parameters
window.SpkJSBridge.callJS({ action, params, whoAmI })

Interface description:

  • SpkJSBridge: a JS interface object implemented by H5 and exposed in the WebView
  • callJS: a method of SpkJSBridge

Parameter Description:

  • action: string, the action you want JS to perform

  • params: JSON object, the data to pass to JS

  • whoAmI: number indicating which end is calling

    • 1: called by iOS
    • 2: called by Android

Return value: as agreed upon by both ends

3. Call examples

Two examples illustrate how H5 and Native interact through the interfaces above. The figure below shows a case where a native confirmation dialog should pop up when the user exits scene creation:

As shown, the navigation bar is Native, with the WebView below it. The back and save buttons on the navigation bar must be controlled by H5 based on the scene content. For example, in the figure above, when the user clicks the back button with unsaved content, H5 needs to tell Native whether it can go back directly or should show a prompt first. The interaction works as follows.

Native calls:

window.SpkJSBridge.callJS({
  action: "can_back",
  params: {},
  whoAmI: 1 // 1: iOS, 2: Android
})


H5 returns:

{
  can: false,
  target: "prev"
}


Return value description:

  • can: Boolean; true means Native can go back directly, false means a confirmation dialog must be shown
  • target: string; “prev” goes back one level, “top” goes back to the top level, and “closeWeb” closes the WebView

In other words, when the user taps Native’s back button, Native calls JS’s can_back action; JS checks whether there is unsaved content and, if so, returns the value above, telling Native to show the confirmation dialog.
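On the H5 side, SpkJSBridge can simply be a plain object whose callJS method dispatches on action. A minimal sketch, assuming a hypothetical hasUnsavedContent() helper (the project’s actual dispatch code is not shown in this article):

window.SpkJSBridge = {
  callJS({ action, params, whoAmI }) {
    const handlers = {
      // Native asks: may I go back directly?
      can_back() {
        const dirty = hasUnsavedContent() // hypothetical helper
        // Must return synchronously — Native cannot consume a Promise (see "Caveats" below)
        return { can: !dirty, target: 'prev' }
      }
    }
    return handlers[action] ? handlers[action](params) : undefined
  }
}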

Besides “back”, there is also a “save” button. H5 controls whether the save button is enabled and which method is invoked when the user taps it. Specifically, after each WebView loads, JS determines whether the current content can be saved. For example, the scene in the figure above only has the “say to the speaker” part and no “speaker response”, so it cannot be saved yet. H5 therefore calls Native’s displayRightButton() method, telling Native the button’s text, whether it is enabled, and the callback to invoke when the user taps the enabled button:

window.SpkNativeBridge.callNative({
  action: "displayRightButton",
  params: {
    name: "Next step", // button text
    enable: false,
    callbackName: "scene_topic_save"
  },
  whoCare: 0
})

4. Caveats

When H5 calls Native on Android, only primitive types (strings or numbers) can be passed; JSON objects cannot. iOS has no such limitation. H5 therefore needs to determine whether the WebView environment is Android or iOS and, if the former, convert the JSON object to a string:

// Wrapper method
window.callNative = (param) => {
  if (speakerWebviewHost === 2) param = JSON.stringify(param) // 2: Android
  window.SpkNativeBridge.callNative(param)
}
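The speakerWebviewHost flag above might be derived from the user agent; a sketch (the project’s actual detection logic is an assumption):

// 1: iOS, 2: Android — mirroring the whoAmI/whoCare convention
const speakerWebviewHost = /android/i.test(navigator.userAgent) ? 2 : 1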

In addition, the return value of a JS method called by Native must be returned synchronously. Although Native calls JS asynchronously, Native cannot handle a Promise returned by JS. We therefore set the async parameter to false on the underlying XMLHttpRequest:

// Synchronous request, so the result can be returned to Native directly
const scene = $.ajax({
    url,
    async: false
}).responseJSON


Finally, one gotcha: if localStorage is not enabled in the Android WebView, H5 pages that use localStorage will simply “freeze”!
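A defensive feature check can guard against this; a minimal sketch:

function localStorageAvailable() {
  try {
    const key = '__ls_test__'
    window.localStorage.setItem(key, '1')
    window.localStorage.removeItem(key)
    return true
  } catch (e) {
    return false // localStorage disabled or denied in this WebView
  }
}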

Custom Chinese fonts

As mentioned earlier, the skills list and detail pages use Adobe’s open-source Source Han Serif font, and the native alarm clock and other features use this font as well:

The titles “Gameplay introduction” and “Feature introduction”, as well as the body text of the former, need the custom Chinese font. However, the open-source font file provided by the designer is 23 MB and contains over 65,000 characters. Given the limited set of characters the skills and alarm features actually use, we decided to use font subsetting.

After some research, and considering that the skills list needs to be subset dynamically, we built a font service: “Qiziku” (“strange font library”). Qiziku provides online dynamic subsetting of Chinese fonts, instantly shrinking font files from tens of MB to tens of KB or even a few KB. Combined with a customized loading script based on the Web Font Loader developed by Adobe and Google (github.com/typekit/web…), font loading and application are fully automated.
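With Web Font Loader’s custom module, loading a subsetted font from a stylesheet URL looks roughly like this (the CSS URL is hypothetical, and our customized loading script differs in detail):

WebFont.load({
  custom: {
    families: ['myWebFont'],
    urls: ['//s3.ssl.qhres.com/static/subset-font.css'] // hypothetical subsetted @font-face CSS
  },
  active() {
    // font is loaded and applied; no FOIT/FOUT surprises
  }
})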

Qiziku currently hosts all of the company’s licensed fonts for use by Web and client projects across all of the company’s business lines:

To achieve the best user experience, Qiziku provides a rich set of APIs for flexible customization; servers and browsers alike can subset fonts dynamically through API calls. There are eight APIs in two categories: the first returns font URLs, either URLs uploaded to a CDN or Base64-encoded Data URLs; the second returns CSS @font-face rule text, with the font sources as either CDN URLs or Data URLs.

For example, the API that returns the font’s CDN URLs responds like this:

{
  "ttf": "//s3.ssl.qhres.com/static/b73305c8dde4d68e.ttf",
  "woff": "//s1.ssl.qhres.com/static/e702cca6e68ab80a.woff",
  "woff2": "//s1.ssl.qhres.com/static/e27f2a98e5baf04d.woff2",
  "eot": "//s2.ssl.qhres.com/static/590b2e87fb74c9d6.eot"
}


And the API that returns a CSS @font-face rule with Data URL sources responds like this:

@font-face {
  font-family: myWebFont;
  src: url('data:font/opentype;base64,Fg8AA...8AAw==');
  src: url('data:font/opentype;base64,Fg8AA...8AAw==?#iefix') format('embedded-opentype'),
       url('data:font/opentype;base64,d09GM...AAA==') format('woff2'),
       url('data:font/opentype;base64,d09GR...Ssw==') format('woff'),
       url('data:font/opentype;base64,AAEAA...//wAD') format('truetype');
}

The simplest way to use this is to copy and paste the rule directly into the page:

However, since our project has its own server, we can deliver the subsetted font to the browser along with the page and CSS, completely avoiding FOUT and matching the user experience of a native system font.
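On the server this amounts to requesting the @font-face rule text from Qiziku for exactly the characters on the page and inlining it into the rendered HTML; a rough sketch (the endpoint and parameters are hypothetical, since Qiziku’s API is internal):

// Fetch the @font-face CSS (Data URL sources) for the given characters
const fetch = require('node-fetch')

async function getFontFaceCSS(text) {
  const url = 'https://qiziku.internal/api/fontface?text=' + encodeURIComponent(text)
  const res = await fetch(url)
  return res.text() // e.g. the @font-face rule shown above, inlined into a <style> block
}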

At present, Qiziku is an internal 360 project: it serves only teams within the company and is not available externally. Readers interested in font subsetting can refer to my earlier article “Front-end Font Subsetting in Practice” (mp.weixin.qq.com/s/pq9hXz_iG…).

Form input

Form input has three key points: first, a componentized input box that is easy to add and remove; second, counting the user’s input, which involves the composition* events; third, using debounce to avoid counting the input prematurely.

First, to let the user enter multiple “speaker responses” and to show custom placeholder text, the input box uses a div element with contenteditable set to true:

<div class="fieldset">
  <div class="placeholder">Enter what you want the speaker to say <small>/ up to {{lengthLimit}} characters</small></div>
  <div class="input" contenteditable="true"></div>
  <img class="clearReplyInput" src="//p0.ssl.qhimg.com/d/lisongfeng/icon_close_s.png">
  <span class="countDown">0</span>
</div>

A front-end component is then built on top of this element:

import inputComponent from './_replyInputComponent'


Each new instance of this component automatically adds a new input box to the DOM:

// Add a new input box
new inputComponent({ formsSelector, formTemplate, lengthLimit })

Second is counting the number of characters the user has entered. Three events are involved:

  • keyup: triggered when the user releases a key on the soft keyboard
  • compositionstart: triggered when the user starts composing text with an input method
  • compositionend: triggered when the user commits the composed text, finishing one input. For example, with a Pinyin or Wubi input method, intermediate input such as “jiang ge gu shi” is only a composition; the event does not fire until the user finally selects “tell a story”

However, compositionend alone does not cover plain English typing, which involves no composition, so keyup is bound as well:

bind('compositionstart', e => { this.placeholder.hide() })
bind('keyup compositionend', inputHandler)

Finally, we use debounce to defer user input events:

import debounce from 'lodash.debounce'

const inputHandler = debounce(e => {
  // ...
}, 300)
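The elided handler body might, for example, read the contenteditable element’s text and update the counter; a hypothetical sketch (the real counting logic is project-specific, and lengthLimit is assumed to be in scope):

const inputHandler = debounce(e => {
  const text = e.target.textContent
  const counter = e.target.parentNode.querySelector('.countDown')
  counter.textContent = String(text.length)                    // show the current count
  counter.classList.toggle('over', text.length > lengthLimit)  // hypothetical over-limit style
}, 300)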

Many people are unclear about the difference between debounce and throttle:

  • debounce: runs the handler only after the events have stopped firing for a given interval; for example, the inputHandler above runs only when its registered events (keyup, compositionend) have been quiet for at least 300 milliseconds
  • throttle: for events that fire in dense bursts, such as scroll or resize, it limits the handler so it runs evenly, at most once per interval
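Minimal implementations make the difference concrete (sketches, not lodash’s actual code):

// debounce: run fn only after calls have been quiet for `wait` ms
function debounce(fn, wait) {
  let timer
  return function (...args) {
    clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), wait)
  }
}

// throttle: run fn at most once every `wait` ms
function throttle(fn, wait) {
  let last = 0
  return function (...args) {
    const now = Date.now()
    if (now - last >= wait) {
      last = now
      fn.apply(this, args)
    }
  }
}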

Debounce is used here to wrap the actual handler so that the input is not counted prematurely while the user is still typing.

Docker deployment

Container deployment gives us easy multi-datacenter disaster recovery: online services are unaffected when a datacenter is taken offline for network cutover or maintenance. We deploy containers with Stark, the container service of 360’s HULK cloud platform:

The Docker deployment process is as follows:

  1. Build the Docker image locally (see the sketch after this list)
  2. Upload the image to Stark
  3. Update the container image
  4. Restart the service
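For step 1, a minimal Dockerfile for a Node.js/ThinkJS service might look like this (a sketch only; the base image, port, and entry file are assumptions, not the project’s actual build file):

# Sketch — base image, port, and entry file are assumptions
FROM node:8
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY . .
EXPOSE 8360
CMD ["node", "production.js"]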

For how to use Docker, see the official Get Started guide and the Dockerfile reference:

  • Get Started: docs.docker.com/get-started…
  • Dockerfile reference: docs.docker.com/engine/refe…

Summary

Once again this article gets “straight to the point”, briefly introducing some basic practices from the development of the H5 part of the 360 AI speaker. I hope it serves as a useful reference for peers, and criticism and corrections are welcome. There were also some interesting algorithm-related problems during development that are worth sharing; I will write about them once the project has launched and I have time.