Recently I came across the event "Deploying a TensorFlow Model Inference Function on a Serverless Architecture" and became very interested in Serverless. In the spirit of learning, I took a first look at two Serverless offerings: OpenFaaS and Tencent Cloud Serverless. This post records that hands-on comparison as a first step into Serverless.
OpenFaaS
Follow the documentation to deploy the framework on Ubuntu 20.04.
Then create the Python function:
def handle(req):
    print("Hello! You said: " + req)
Then modify the configuration file and fill in your Docker Hub account name:
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  pycon:
    lang: python3
    handler: ./pycon
    image: <DockerHub username>/pycon
OpenFaaS provides a deployment tool called faas-cli. faas-cli first pushes the built image to the corresponding Docker Hub account and then has the OpenFaaS service pull it down.
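As a rough sketch of that workflow (assuming the stack file above is saved as pycon.yml and you are logged in to Docker Hub; the exact flags may vary by faas-cli version):

# scaffold a function from the python3 template (this is how pycon/handler.py and pycon.yml are typically created)
faas-cli new pycon --lang python3

# build the image, push it to Docker Hub, and deploy it to the gateway in one step
faas-cli up -f pycon.yml

# or run the three stages separately
faas-cli build -f pycon.yml
faas-cli push -f pycon.yml
faas-cli deploy -f pycon.yml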
After the deployment succeeds, you can view the newly created function on the Web UI at 127.0.0.1:8080/ui/.
Testing:
curl localhost:8080/function/pycon -d "Hi"
Hello! You said: Hi
As can be seen from the above example:
- Developers just need to write event-handling functions, modify configuration files, and confirm deployment without knowing the server infrastructure or even which Web framework the code is actually deployed in.
- The FaaS service provides the invocation endpoint.
Deploying an image recognition service to Tencent Cloud
In addition to building Serverless services on hardware and containers (for example, OpenFaaS uses Docker), there is an emerging approach: application-specific virtual machines such as WebAssembly (Wasm).
This example uses Second State's Serverless WebAssembly virtual machine (SSVM) to compile image recognition service code written in Rust into a .so file, which is then uploaded to Tencent Cloud's FaaS through the Serverless tooling.
After setting up the demo following the Second State instructions, run sls deploy in the project root directory to authenticate your Tencent Cloud account. The deployment succeeds in about 100 seconds, and the newly deployed functions can be seen in the Tencent Cloud console.
Testing:
Modifying the example
I learned how to use Tencent Cloud Serverless by modifying the Second State example.
First, understand the structure of 0700-tensorflow-scf:
The cos, layer, and scf directories each contain a serverless.yml. When sls deploy is executed, you can see that the files in these directories are packaged and uploaded.
Run ssvmup build --enable-ext --enable-aot to generate pkg/scf.so and copy it to the scf/ directory.
scf/bootstrap is the script that runs as the service process.
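Putting this together, the relevant part of the project layout looks roughly like this (a sketch reconstructed from the files mentioned in this section, not a complete listing):

0700-tensorflow-scf/
├── src/main.rs          # Rust source of the image recognition function
├── pkg/scf.so           # built by ssvmup, then copied into scf/
├── cos/serverless.yml
├── layer/serverless.yml
└── scf/
    ├── serverless.yml
    ├── bootstrap        # startup script that invokes ssvm-tensorflow with scf.so
    └── scf.so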
The core command is as follows, where “$_HANDLER” is scf.so
RESPONSE=$(LD_LIBRARY_PATH=/opt /opt/ssvm-tensorflow "$_HANDLER" <<< "$EVENT_DATA")
This means we can run "$_HANDLER" locally, so the business function can be debugged locally before deploying.
ssvm-tensorflow needs to be compiled first, or you can download the prebuilt binary and run it directly.
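As a local-debugging sketch that mirrors the bootstrap command above (it assumes the ssvm-tensorflow binary and its shared libraries sit in the current directory, and that test.json holds the simulated request described later in this post):

# simulate what bootstrap does: pipe the event JSON into ssvm-tensorflow running scf.so
EVENT_DATA=$(cat test.json)
RESPONSE=$(LD_LIBRARY_PATH=. ./ssvm-tensorflow pkg/scf.so <<< "$EVENT_DATA")
echo "$RESPONSE"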
After compiling, migrate the demo code into 0700-tensorflow-scf/src/main.rs:
use std::io::{self, Read};
use ssvm_tensorflow_interface;
use serde::Deserialize;

// Find the index and label of a given probability value in the output vector.
fn search_vec(vector: &Vec<f32>, labels: &Vec<&str>, value: &f32) -> (i32, String) {
    for (i, f) in vector.iter().enumerate() {
        if f == value {
            return (i as i32, labels[i].to_owned());
        }
    }
    return (-1, "Unclassified".to_owned());
}

fn main() {
    // The frozen MobileNet v2 model and its label file are embedded at compile time.
    let model_data: &[u8] = include_bytes!("mobilenet_v2_1.4_224_frozen.pb");
    let labels = include_str!("imagenet_slim_labels.txt");
    let label_lines: Vec<&str> = labels.lines().collect();

    // Read the FaaS event from stdin and decode the base64 image in its "body" field.
    let mut buffer = String::new();
    io::stdin().read_to_string(&mut buffer).expect("Error reading from STDIN");
    let obj: FaasInput = serde_json::from_str(&buffer).unwrap();
    let img_buf = base64::decode_config(&(obj.body), base64::STANDARD).unwrap();

    // Resize the JPEG to 224x224 RGB floats and run it through the model.
    let flat_img = ssvm_tensorflow_interface::load_jpg_image_to_rgb32f(&img_buf, 224, 224);
    let mut session = ssvm_tensorflow_interface::Session::new(model_data, ssvm_tensorflow_interface::ModelType::TensorFlow);
    session.add_input("input", &flat_img, &[1, 224, 224, 3])
        .add_output("MobilenetV2/Predictions/Softmax")
        .run();
    let res_vec: Vec<f32> = session.get_output("MobilenetV2/Predictions/Softmax");

    // Pick the three highest probabilities and look up their labels.
    let mut sorted_vec = res_vec.clone();
    sorted_vec.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let top1 = sorted_vec[0];
    let top2 = sorted_vec[1];
    let top3 = sorted_vec[2];
    let r1 = search_vec(&res_vec, &label_lines, &top1);
    let r2 = search_vec(&res_vec, &label_lines, &top2);
    let r3 = search_vec(&res_vec, &label_lines, &top3);
    println!("{}: {:.2}%\n{}: {:.2}%\n{}: {:.2}%",
        r1.1, top1 * 100.0,
        r2.1, top2 * 100.0,
        r3.1, top3 * 100.0
    );
}

#[derive(Deserialize, Debug)]
struct FaasInput {
    body: String
}
Testing:
It outputs the three most likely classifications.
A JSON file is used to simulate the request data, with the base64-encoded image data placed in the "body" field.
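One way to build such a payload (a sketch; cat.jpg and test.json are just placeholder names here, and -w0 is the GNU coreutils base64 flag for no line wrapping):

# wrap the base64-encoded image in the JSON structure that FaasInput expects
echo "{\"body\": \"$(base64 -w0 cat.jpg)\"}" > test.json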
Finally, run sls deploy again to redeploy online:
Conclusion
Through the OpenFaaS and Tencent Cloud Serverless services, this article takes a first look at how a workload gets deployed to a cloud platform. With the tools provided by FaaS vendors, users can skip incidental details such as operating Docker directly or setting the environment variables a script runs with, and focus on business development.
One More Thing
Experience the Tencent Cloud Serverless demo right away and receive a Serverless new-user package 👉 Serverless /start
Welcome to: Serverless Chinese!