When serving a site over the WebSocket + TLS protocol, a good practice is to add a robots.txt file to the root directory of the website.

The robots exclusion protocol, commonly known as robots.txt (lowercase), is an ASCII-encoded text file stored in the root directory of a website. It tells the crawlers of web search engines (also known as web spiders) which content on the site they may retrieve and which they may not. Because URLs are case-sensitive on some systems, the file name should be all lowercase: robots.txt. The file must be placed in the root directory of the website. If you want to define crawler behavior for a subdirectory separately, you can either merge those custom settings into the robots.txt file in the root directory or use robots meta tags. The following code blocks a long list of well-known crawlers from the entire site:
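As a small illustration of the subdirectory case mentioned above (the path names here are hypothetical), a single root robots.txt can carry per-directory and per-crawler rules, so a separate file in the subdirectory is not needed:

```
# All crawlers: everything is allowed except the /private/ subdirectory
User-agent: *
Disallow: /private/

# Googlebot gets its own, stricter rule set
User-agent: Googlebot
Disallow: /private/
Disallow: /drafts/
```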

```
User-agent: Baiduspider
Disallow: /

User-agent: Sosospider
Disallow: /

User-agent: sogou spider
Disallow: /

User-agent: YodaoBot
Disallow: /

User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /

User-agent: Slurp
Disallow: /

User-agent: Teoma
Disallow: /

User-agent: ia_archiver
Disallow: /

User-agent: twiceler
Disallow: /

User-agent: MSNBot
Disallow: /

User-agent: Scrubby
Disallow: /

User-agent: Robozilla
Disallow: /

User-agent: Gigabot
Disallow: /

User-agent: googlebot-image
Disallow: /

User-agent: googlebot-mobile
Disallow: /

User-agent: Yahoo-mmcrawler
Disallow: /

User-agent: Yahoo-blogs/v3.9
Disallow: /

User-agent: psbot
Disallow: /
```
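To see how a crawler is expected to interpret these blanket `Disallow: /` rules, here is a minimal sketch using Python's standard-library robots.txt parser (the rule text below is an abridged, two-crawler excerpt of the full list):

```python
# Parse a robots.txt body and ask whether specific crawlers may fetch URLs,
# using the standard-library urllib.robotparser module.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Baiduspider
Disallow: /

User-agent: Googlebot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Both named crawlers are barred from every path on the site.
print(parser.can_fetch("Googlebot", "/index.html"))    # False
print(parser.can_fetch("Baiduspider", "/any/page"))    # False

# A crawler not listed (and with no "User-agent: *" record present)
# falls through to no restrictions at all.
print(parser.can_fetch("SomeOtherBot", "/index.html"))  # True
```

Note that robots.txt is purely advisory: well-behaved crawlers consult it before fetching, but nothing technically prevents a crawler from ignoring it.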

Save the code above in Notepad (or any plain-text editor), name the file robots.txt, and upload it to the root directory of the website.