Small knowledge, big challenge! This article is taking part in the “Essential Tips for Programmers” creation activity.

Today’s post builds on the previous article.

1. Advanced API test (fetching webpage content from a URL and analyzing its sentiment)

The Baidu sentiment analysis API accepts at most 2048 bytes of text per request. If an article is longer than 2048 bytes, a direct call exceeds the limit, so the text scraped from the URL has to be split into segments before its sentiment can be analyzed.
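Before the full script, here is a minimal sketch of that splitting idea. The helper names and the 1500-character chunk size are only illustrative; the complete script below does the same job with its own cut_text helper.

def is_too_long(text, byte_limit=2048):
    # Baidu's limit applies to the encoded byte length, not the character count
    return len(text.encode()) >= byte_limit

def split_text(text, length=1500):
    # cut the text into consecutive chunks of at most `length` characters
    return [text[i:i + length] for i in range(0, len(text), length)]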

import requests
import json
import re
from bs4 import BeautifulSoup


# Split the text into consecutive chunks of `lenth` characters
def cut_text(text, lenth):
    textArr = re.findall('.{' + str(lenth) + '}', text)
    textArr.append(text[(len(textArr) * lenth):])
    return textArr


# Call the Baidu sentiment analysis API and return 2 (positive), 1 (neutral) or 0 (negative)
def get_emotion(data):
    # token and request URL of the Baidu sentiment analysis API
    token = '24.bcc989b57db903cc1189346275b7a372.2592000.1604971755.282335-22803254'
    url = 'https://aip.baidubce.com/rpc/2.0/nlp/v1/sentiment_classify?charset=UTF-8&access_token={}'.format(token)
    if (len(data.encode()) < 2048):
        new_each = {'text': data}              # put the text into the request body
        new_each = json.dumps(new_each)
        res = requests.post(url, data=new_each)      # call the Baidu sentiment analysis API
        res_text = res.text                    # save the analysis result as a string
        print("content: ", res_text)
        result = res_text.find('items')
        if (result != -1):                     # 'items' is present, so the call succeeded
            json_data = json.loads(res.text)
            negative = (json_data['items'][0]['negative_prob'])    # negative probability
            positive = (json_data['items'][0]['positive_prob'])    # positive probability
            print("positive:", positive)
            print("negative:", negative)
            if (positive > negative):          # positive
                return 2
            elif (positive == negative):       # neutral
                return 1
            else:                              # negative
                return 0
        else:
            return 1
    else:
        print("The text is longer than 2048 bytes, splitting it into chunks")
        data = cut_text(data, 1500)            # split the text into 1500-character chunks
        sum_positive = 0.0                     # sum of the positive probabilities
        sum_negative = 0.0                     # sum of the negative probabilities
        for each in data:
            new_each = {'text': each}
            new_each = json.dumps(new_each)
            res = requests.post(url, data=new_each)  # call the API for each chunk
            res_text = res.text                # save the analysis result
            result = res_text.find('items')
            if (result != -1):
                json_data = json.loads(res.text)
                positive = (json_data['items'][0]['positive_prob'])
                negative = (json_data['items'][0]['negative_prob'])
                sum_positive = sum_positive + positive
                sum_negative = sum_negative + negative
        print(sum_positive)
        print(sum_negative)
        if (sum_positive > sum_negative):      # positive
            return 2
        elif (sum_positive == sum_negative):   # neutral
            return 1
        else:                                  # negative
            return 0


# Fetch the webpage and concatenate the text of all <p> tags
def get_html(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) '
                      'AppleWebKit/537.36 (KHTML, like Gecko)'
    }
    response = requests.get(url, headers=headers)    # request the page
    html = response.text
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.select('p')
    text = ""
    for i in a:
        text = text + i.text
    return text


def main():
    txt1 = get_html("https://baijiahao.baidu.com/s?id=1680186652532987655&wfr=spider&for=pc")
    print(txt1)
    print("txt1 test result: ", get_emotion(txt1))


if __name__ == "__main__":
    main()

2. Connect to the database and perform insert, delete, update, and query operations

My application scenario for the database connection: the URLs are already stored in the database, and the text behind each URL needs to be classified as positive or negative. The steps are:

Connect to the database, query it for the URL addresses, fetch the text of each URL, judge whether it is positive or negative, and then write the results back to the database.
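The database code itself is not shown in this post, so the following is only a rough sketch of that workflow using pymysql. The connection parameters, the article table and its id / url / emotion columns are made-up placeholders, and get_html / get_emotion are the functions from the script in section 1.

import pymysql

# Hypothetical schema for illustration: article(id, url, emotion)
conn = pymysql.connect(host='localhost', user='root', password='123456',
                       database='test', charset='utf8mb4')
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT id, url FROM article")   # fetch the stored URLs
        for article_id, url in cursor.fetchall():
            text = get_html(url)                         # grab the webpage text
            emotion = get_emotion(text)                  # 2 positive, 1 neutral, 0 negative
            # write the sentiment label back to the database
            cursor.execute("UPDATE article SET emotion = %s WHERE id = %s",
                           (emotion, article_id))
    conn.commit()
finally:
    conn.close()

The same pattern covers the other operations: an INSERT adds new URLs, a DELETE removes finished ones, and the UPDATE above stores the analysis result in the row.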

3. Use Java to call Python scripts

There are many ways to call a Python script from Java. The example below uses Runtime.getRuntime().exec(), which you can look up for more details.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class testPython {
    public static void main(String[] args) {
        Process process;
        try {
            // run the Python script (adjust the interpreter and path to your environment)
            process = Runtime.getRuntime().exec("python D:\\Users\\2.py");
            // read what the script prints to standard output
            BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
            String line = null;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            process.waitFor();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}