
After reading the headline you might ask: why would I need such a proxy server? Isn't Geth working fine on its own? Here is what I (Lao Qian) ran into recently. Most Geth nodes on the Ethereum network run with the default values of certain parameters, and one of them is maxpeers, which limits how many peers a node will accept. Its default value is 25, meaning a node accepts at most 25 peers; when a 26th peer tries to establish a connection, it is rejected. My research team recently found that the peer slots of most nodes on Ethereum's mainnet were already full, and it was difficult to connect to these full nodes: you had to wait in line for one of their peers to drop off and leave a vacant seat. To make matters worse, whenever I changed the Geth code and wanted to rebuild it, I had to kill the process and reconnect, and if I was unlucky someone else would have taken my seat in the meantime.

If I had a simple "seat-holding" proxy server that stayed connected to the remote node and kept occupying a peer slot, then behind that proxy I could kill my geth process at any time, rebuild it, and reconnect to the proxy without worrying about losing the seat. That would be ideal. So I decided to write such a proxy myself, and use it as an excuse to read the geth source and learn the details of Geth's p2p messages.

This tutorial on the official Ethereum wiki is a good starting point: it shows how to implement a p2p server with a custom protocol. Our proxy is exactly such a p2p server, one that has to impersonate a real Geth node when talking to the remote node (the upstream node). However, we do not need to implement the whole eth protocol; we only need to handle a few cases ourselves (the handshake exchanged before a connection is established, and the broadcast sent when a new block is mined). In all other cases the proxy can simply relay the network packets between upstream and downstream (the downstream node being the Geth node sitting behind the proxy).

That's the basic idea. Now let's look at the code, starting with the structures we need to define:

```go
// statusData is the network packet for the status message,
// copied from go-ethereum/eth/protocol.go.
type statusData struct {
	ProtocolVersion uint32
	NetworkId       uint64
	TD              *big.Int
	CurrentBlock    common.Hash
	GenesisBlock    common.Hash
}

// newBlockData is the network packet for the block propagation message,
// i.e. the format of a new-block broadcast.
type newBlockData struct {
	Block *types.Block
	TD    *big.Int
}

// conn wraps a single peer connection.
type conn struct {
	p  *p2p.Peer
	rw p2p.MsgReadWriter
}

type proxy struct {
	lock           sync.RWMutex
	upstreamNode   *discover.Node // the upstream node
	upstreamConn   *conn          // connection to the upstream node
	downstreamConn *conn          // connection to the downstream node
	upstreamState  statusData     // basic state of the upstream node: its latest block and total difficulty
	srv            *p2p.Server    // the p2p server instance
}
```
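For completeness, the snippets in this post assume imports roughly like the following. This is only a sketch; the exact paths depend on the go-ethereum version you build against, and the code here uses the older layout that still has p2p/discover:

```go
import (
	"flag"
	"fmt"
	"io"
	"math/big"
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/eth"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/p2p"
	"github.com/ethereum/go-ethereum/p2p/discover"
	"github.com/ethereum/go-ethereum/rlp"
)
```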

Let’s configure a simple P2P Server:

```go
var pxy *proxy
var upstreamUrl = flag.String("upstream", "", "Upstream enode URL to connect to")
var listenAddr = flag.String("listenaddr", "127.0.0.1:36666", "Listening addr")

func init() {
	flag.Parse()
}

func main() {
	// Every node in the Ethereum network has a unique identity of the form
	// enode://<key>@<ip>:<port>, so generate a node key for the proxy.
	nodekey, _ := crypto.GenerateKey()
	fmt.Println("Node Key Generated")

	// Parse the upstream node's enode URL from the command line.
	node, err := discover.ParseNode(*upstreamUrl)
	if err != nil {
		fmt.Println("parse upstream enode err: ", err)
		return
	}

	// Create the proxy instance.
	pxy = &proxy{
		upstreamNode: node,
	}

	config := p2p.Config{
		PrivateKey:  nodekey,
		MaxPeers:    2, // the proxy accepts at most two peers: one upstream node and one downstream node
		NoDiscovery: true,
		DiscoveryV5: false,
		Name:        common.MakeName(fmt.Sprintf("%s/%s", ua, node.ID.String()), ver),
		BootstrapNodes: []*discover.Node{node},
		StaticNodes:    []*discover.Node{node},
		TrustedNodes:   []*discover.Node{node},
		// Protocols supported by the proxy server; here we register our custom protocol.
		Protocols:  []p2p.Protocol{newManspreadingProtocol()},
		ListenAddr: *listenAddr,
		Logger:     log.New(),
	}
	config.Logger.SetHandler(log.StdoutHandler)

	// Create and start the p2p server.
	pxy.srv = &p2p.Server{Config: config}

	var wg sync.WaitGroup
	wg.Add(2)
	err = pxy.srv.Start()
	if err != nil {
		fmt.Println(err)
	}
	wg.Wait() // block forever to keep the server running
}
```
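Two details the snippet above leaves implicit: `ua` and `ver` are just the user-agent and version strings used to build the devp2p client name, and the downstream geth needs the proxy's own enode URL in order to connect to it. A minimal sketch of how you might fill those in; the constant values and the helper name are illustrative, not taken from the original repo:

```go
// ua and ver only feed into the devp2p client name string; any values will do.
// These particular values are illustrative.
const (
	ua  = "manspreading"
	ver = "1.0.0"
)

// printEnode logs the proxy's own enode URL once pxy.srv.Start() has
// succeeded, so the downstream geth can attach to it, for example with
// admin.addPeer("<printed enode URL>") in the geth console.
func printEnode(srv *p2p.Server) {
	fmt.Println("Proxy enode: ", srv.Self().String())
}
```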

Now comes the key part: the custom protocol definition.

```go
func newManspreadingProtocol() p2p.Protocol {
	return p2p.Protocol{
		// Identify the protocol with eth's own metadata; otherwise the
		// upstream and downstream nodes would not recognize our protocol.
		Name:    eth.ProtocolName,
		Version: eth.ProtocolVersions[0],
		Length:  eth.ProtocolLengths[0],
		// Run is the core of the protocol: it is invoked once for each peer
		// that connects, and our handle method then loops over that peer's messages.
		Run: handle,
		NodeInfo: func() interface{} {
			fmt.Println("Noop: NodeInfo called")
			return nil
		},
		PeerInfo: func(id discover.NodeID) interface{} {
			fmt.Println("Noop: PeerInfo called")
			return nil
		},
	}
}
```
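To see concretely what metadata the proxy is borrowing, a tiny standalone program like the one below prints it. The exact values depend on your go-ethereum version; at the time of writing, eth/63 was the highest supported version:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/eth"
)

// Prints the eth protocol metadata that the proxy reuses, plus the two
// message codes the handle method cares about.
func main() {
	fmt.Println("name:", eth.ProtocolName)
	fmt.Println("version:", eth.ProtocolVersions[0])
	fmt.Println("length:", eth.ProtocolLengths[0])
	fmt.Printf("StatusMsg=%#x NewBlockMsg=%#x\n", eth.StatusMsg, eth.NewBlockMsg)
}
```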

Next comes the handle method, the core of the protocol:

```go
func handle(p *p2p.Peer, rw p2p.MsgReadWriter) error {
	fmt.Println("Run called")
	for {
		fmt.Println("Waiting for msg...")
		msg, err := rw.ReadMsg()
		fmt.Println("Got a msg from: ", fromWhom(p.ID().String()))
		if err != nil {
			fmt.Println("readMsg err: ", err)
			// If the read failed with EOF, the peer has disconnected,
			// so clear the corresponding connection.
			if err == io.EOF {
				pxy.lock.Lock()
				if p.ID() == pxy.upstreamNode.ID {
					pxy.upstreamConn = nil
				} else {
					pxy.downstreamConn = nil
				}
				pxy.lock.Unlock()
			}
			return err
		}
		fmt.Println("msg.Code: ", msg.Code)

		if msg.Code == eth.StatusMsg {
			var myMessage statusData
			err = msg.Decode(&myMessage)
			if err != nil {
				fmt.Println("decode statusData err: ", err)
				return err
			}

			// If the handshake comes from the upstream node, register it and save its
			// latest state; the most important fields are CurrentBlock and TD, which the
			// downstream node uses to decide whether it needs to sync.
			// If the handshake comes from the downstream node, just register it.
			pxy.lock.Lock()
			if p.ID() == pxy.upstreamNode.ID {
				pxy.upstreamState = myMessage
				pxy.upstreamConn = &conn{p, rw}
			} else {
				pxy.downstreamConn = &conn{p, rw}
			}
			pxy.lock.Unlock()

			// Send back a handshake packet. It mirrors the status of the upstream
			// node, because we are acting as the upstream node's "agent".
			err = p2p.Send(rw, eth.StatusMsg, &statusData{
				ProtocolVersion: myMessage.ProtocolVersion,
				NetworkId:       myMessage.NetworkId,
				TD:              pxy.upstreamState.TD,
				CurrentBlock:    pxy.upstreamState.CurrentBlock,
				GenesisBlock:    myMessage.GenesisBlock,
			})
			if err != nil {
				fmt.Println("handshake err: ", err)
				return err
			}
		} else if msg.Code == eth.NewBlockMsg {
			var myMessage newBlockData
			err = msg.Decode(&myMessage)
			if err != nil {
				fmt.Println("decode newBlockMsg err: ", err)
			}

			// Update the proxy's state to the latest TD and CurrentBlock,
			// so that the downstream node can sync against them.
			pxy.lock.Lock()
			if p.ID() == pxy.upstreamNode.ID {
				pxy.upstreamState.CurrentBlock = myMessage.Block.Hash()
				pxy.upstreamState.TD = myMessage.TD
			}
			pxy.lock.Unlock()

			// Decoding consumed the payload, so re-encode the message before relaying it.
			size, r, err := rlp.EncodeToReader(myMessage)
			if err != nil {
				fmt.Println("encoding newBlockMsg err: ", err)
			}
			relay(p, p2p.Msg{Code: eth.NewBlockMsg, Size: uint32(size), Payload: r})
		} else {
			// Everything else is relayed as-is to the other side.
			relay(p, msg)
		}
	}
}
```
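One helper used above, `fromWhom`, isn't shown in this post; it only labels a peer ID as upstream or downstream for the log output. A minimal sketch of what it could look like (the version in the full repo may differ):

```go
// fromWhom labels a peer ID string as "upstream" or "downstream", purely for
// log output. upstreamNode is set once in main before the server starts,
// so no locking is needed here.
func fromWhom(nodeID string) string {
	if nodeID == pxy.upstreamNode.ID.String() {
		return "upstream"
	}
	return "downstream"
}
```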

Finally, the relay method that forwards message packets between the two peers:

```go
func relay(p *p2p.Peer, msg p2p.Msg) {
	var err error
	pxy.lock.RLock()
	defer pxy.lock.RUnlock()
	// If the message came from the downstream node, write it to the upstream
	// node through the wrapped conn, and vice versa.
	if p.ID() != pxy.upstreamNode.ID && pxy.upstreamConn != nil {
		err = pxy.upstreamConn.rw.WriteMsg(msg)
	} else if p.ID() == pxy.upstreamNode.ID && pxy.downstreamConn != nil {
		err = pxy.downstreamConn.rw.WriteMsg(msg)
	} else {
		fmt.Println("One of upstream/downstream isn't alive: ", pxy.srv.Peers())
	}
	if err != nil {
		fmt.Println("relaying err: ", err)
	}
}
```

At this point, a basic seat-holding proxy server is complete, and you no longer have to worry about someone stealing your seat.

For the complete code, see xiaoyao1991/manspreading.


About the author:

Lao Qian (BS '14, MS '18) holds a master's degree in Computer Science from UIUC, where he studied under Andrew Miller of Zcash. He has three years of industry experience and is currently a technical partner at NAD Grid (Nadgrid.com). NAD Grid connects the energy world.