1. Why do you need an API gateway?

In business development, the backend often exposes interfaces to different clients such as apps, web pages, mini programs, third-party vendors, and devices. Because our technology stack was relatively unified around the Spring Boot web framework, we initially packaged cross-cutting concerns such as authentication, rate limiting, security policies, and logging into a shared toolkit; developers only had to pull in the toolkit to get these capabilities, and Nginx routed each request to the corresponding microservice cluster node.

As the business expanded, more and more interfaces were exposed, and several problems surfaced:

  • The toolkit has to be reimplemented for every language, for example for microservice nodes written in Go or C++;
  • Different clients require different, heavily customized security policies, which makes the toolkit increasingly complex;
  • Every toolkit upgrade affects a large number of server nodes;
  • Concurrency control works poorly, and the rate-limiting and degradation parameters are hard to tune.

API gateways emerged to solve exactly these problems. A qualified API gateway should have the following characteristics:

  • High availability: as the single entry point for all traffic, the gateway must provide stable and reliable service;
  • High performance: all request traffic passes through the gateway layer, so it must handle high concurrency;
  • High security: it must block malicious external access and support digital signatures, user authorization checks, blacklists and whitelists, firewalls, and so on, to keep each microservice secure;
  • High scalability: it provides capabilities such as traffic control, protocol forwarding, and log monitoring, and can easily be extended with new non-business functions.

With an API gateway layer added to the overall architecture, requests pass through Nginx to the gateway, which forwards the traffic to the corresponding microservice cluster according to configured policies and rules. Developers can then focus on business logic instead of spending time on interface interactions with the various clients.

2. Gateway technology selection

Before adopting one, our team ran an internal evaluation of the current microservice gateway technologies. We compared Spring Cloud Gateway, Kong, OpenResty, Zuul 2, and Soul on the features that matter most for a gateway, such as rate limiting, authentication, monitoring, ease of use, maintainability, and maturity. The material mainly comes from each project's official documentation and practice write-ups by experienced engineers. Weighing all of these factors, we chose the Soul gateway.

3. Gateway combat

3.1 System technical architecture

At that time our backend architecture had already been split into microservices, both business and technology were evolving toward a middle-platform model, and the supporting infrastructure and components were in place. We added a unified gateway layer on top of the existing API layer, using the Soul gateway middleware.

3.2 Soul Gateway Introduction

Soul project page: dromara.org/projects/so… . Soul is an asynchronous, high-performance, cross-language, reactive API gateway whose design draws on excellent gateways such as Kong and Spring Cloud Gateway. It provides the following features:

  • Support for multiple languages, with seamless integration of Dubbo and Spring Cloud;
  • Rich plugin support: authentication, rate limiting, circuit breaking, firewall, and more;
  • Rules and policies can be configured on the gateway dynamically;
  • Plugins are hot-swappable and easy to extend;
  • Cluster deployment and A/B testing are supported.

3.3 Architecture design of Soul Gateway

The official Soul documentation gives a complete description of the gateway's deployment architecture and overall solution.

It can be divided into three parts:

  • Soul-admin: the management and control console. It manages applications, authorization, plugins, forwarding and load-balancing rules, and rate-limit settings, and receives service-provider metadata registrations.

  • Soul-client: an SDK integrated into each service node that automatically registers the different types of services (Spring MVC/Dubbo/Spring Cloud) with the gateway control plane.

  • Soul-web: the gateway node itself. Built on the Spring Reactor model, it performs request forwarding, load balancing, and other functions according to the configuration from the control plane; to improve performance, all configuration is kept in a local cache that is updated by subscription.

4. Implementation principle

4.1 Reactive programming

The reactive programming model was originally proposed by Microsoft to cope with high concurrency and has developed rapidly since then; in the Java world, RxJava and Akka are common frameworks. Reactive programming usually has the following characteristics:

  • Event-driven: In an event-driven application, components interact through loosely coupled producers and consumers. Events are sent and received asynchronously and without blocking.
  • Real-time response: The system pushes data to consumers when producers have a message, rather than having consumers constantly poll or wait for data in a wasteful way.

There are a few concepts that need to be explained:

  • Reactive Streams is a set of standards and specifications for reactive programming;
  • Reactor is a reactive programming framework based on Reactive Streams;
  • WebFlux is a reactive web framework built on top of Reactor.

Spring Boot 2.0 supports WebFlux reactive programming, with the following key component classes (see the sketch after this list):

  • Mono: implements the org.reactivestreams.Publisher interface and represents a publisher of 0 to 1 elements;
  • Flux: implements the org.reactivestreams.Publisher interface and represents a publisher of 0 to N elements;
  • Scheduler: the scheduler that drives the reactive flow, typically backed by various thread pools.
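
A minimal, self-contained sketch of these three components using plain Project Reactor (not Soul code; the class name and values are only for illustration):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class ReactorQuickStart {

    public static void main(String[] args) throws InterruptedException {
        // Scheduler: drives the reactive flow, here a fixed-size parallel thread pool
        Scheduler scheduler = Schedulers.newParallel("demo-threads", 4);

        // Mono: a publisher of 0..1 elements
        Mono<String> mono = Mono.just("hello");
        mono.subscribe(System.out::println);

        // Flux: a publisher of 0..N elements; subscription work runs on the scheduler threads
        Flux.range(1, 5)
                .map(i -> i * 2)
                .subscribeOn(scheduler)
                .subscribe(System.out::println);

        Thread.sleep(200); // give the asynchronous pipeline time to finish
        scheduler.dispose();
    }
}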

The Spring framework provides the WebHandler interface for handling web requests. Soul's SoulWebHandler implements its handle() method and returns a Mono event publisher. The plugin chain is stored in a List, and the plugin list is pulled from the control plane at startup.

public final class SoulWebHandler implements WebHandler {
    ...
    // Initialize the number of scheduler threads based on the configuration or the number of server CPUs.
    public SoulWebHandler(final List<SoulPlugin> plugins) {
        this.plugins = plugins;
        String schedulerType = System.getProperty("soul.scheduler.type", "fixed");
        if (Objects.equals(schedulerType, "fixed")) {
            int threads = Integer.parseInt(System.getProperty("soul.work.threads",
                    "" + Math.max((Runtime.getRuntime().availableProcessors() << 1) + 1, 16)));
            scheduler = Schedulers.newParallel("soul-work-threads", threads);
        } else {
            scheduler = Schedulers.elastic();
        }
    }

    @Override
    public Mono<Void> handle(final ServerWebExchange exchange) {
        return new DefaultSoulPluginChain(plugins).execute(exchange).subscribeOn(scheduler);
    }
}

4.2 Plug-in design

The chain of responsibility is a classic object-oriented pattern used to decouple a design while keeping it flexible and extensible. Soul models the whole flow as a plugin chain: receiving the front-end request, proxying it to the backend, receiving the backend response, and returning the response to the front end can all be handled and extended through plugins.

private static class DefaultSoulPluginChain implements SoulPluginChain {
    ...
    @Override
    public Mono<Void> execute(final ServerWebExchange exchange) {
        return Mono.defer(() -> {
            if (this.index < plugins.size()) {
                SoulPlugin plugin = plugins.get(this.index++);
                Boolean skip = plugin.skip(exchange);
                if (skip) {
                    // Skip this plugin and move on to the next one in the chain
                    return this.execute(exchange);
                } else {
                    return plugin.execute(exchange, this);
                }
            } else {
                return Mono.empty();
            }
        });
    }
}

public interface SoulPlugin {

    // Handle the request
    Mono<Void> execute(ServerWebExchange exchange, SoulPluginChain chain);

    // Plugin type
    PluginTypeEnum pluginType();

    // Execution order
    int getOrder();

    // Plugin name, unique
    String named();

    // Whether this plugin should be skipped for the current request
    Boolean skip(ServerWebExchange exchange);
}
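
Because every plugin simply implements the SoulPlugin interface above and then delegates to the chain, extending the gateway means writing one more plugin. A hypothetical sketch (the class name, log output, and the PluginTypeEnum.FUNCTION constant are assumptions for illustration, not Soul's built-in code):

import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

// Hypothetical plugin: log the request path, then hand the request to the rest of the chain.
public class AccessLogPlugin implements SoulPlugin {

    @Override
    public Mono<Void> execute(final ServerWebExchange exchange, final SoulPluginChain chain) {
        System.out.println("request path: " + exchange.getRequest().getURI().getPath());
        // Always delegate to the remaining plugins, otherwise the request stops here
        return chain.execute(exchange);
    }

    @Override
    public PluginTypeEnum pluginType() {
        return PluginTypeEnum.FUNCTION;
    }

    @Override
    public int getOrder() {
        return 0;
    }

    @Override
    public String named() {
        return "access-log";
    }

    @Override
    public Boolean skip(final ServerWebExchange exchange) {
        return false;
    }
}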

4.3 Registering service metadata

Soul-client provides a Spring-based component that automatically registers service metadata. The service metadata is defined as follows:

@Data
public class MetaData implements Serializable {

    // Unique id
    private String id;

    // Request path
    private String path;

    // Remote call type: http/dubbo
    private String rpcType;

    // Service name
    private String serviceName;

    // Method name
    private String methodName;

    // Method parameter list
    private String parameterTypes;

    // Extended rpc parameters
    private String rpcExt;

    // Whether the service is enabled
    private Boolean enabled;
}

The Soul framework currently supports metadata registration for service providers such as Dubbo and Spring MVC. Taking a Spring MVC HTTP interface as an example, a bean post-processor scans beans annotated with @Controller or @RestController, extracts the service metadata, and sends it to the admin control plane with OkHttpClient, completing the service metadata registration.

public class SoulClientBeanPostProcessor implements BeanPostProcessor {
    ...
    @Override
    public Object postProcessAfterInitialization(@NonNull final Object bean,
                                                 @NonNull final String beanName) throws BeansException {
        // Look up the relevant annotations on the bean class
        Controller controller = AnnotationUtils.findAnnotation(bean.getClass(), Controller.class);
        RestController restController = AnnotationUtils.findAnnotation(bean.getClass(), RestController.class);
        RequestMapping requestMapping = AnnotationUtils.findAnnotation(bean.getClass(), RequestMapping.class);
        if (controller != null || restController != null || requestMapping != null) {
            String contextPath = soulHttpConfig.getContextPath();
            String adminUrl = soulHttpConfig.getAdminUrl();
            if (contextPath == null || "".equals(contextPath)
                    || adminUrl == null || "".equals(adminUrl)) {
                return bean;
            }
            // Get all declared methods of the bean
            final Method[] methods = ReflectionUtils.getUniqueDeclaredMethods(bean.getClass());
            // Assemble the service metadata for each annotated method and send it asynchronously
            for (Method method : methods) {
                SoulClient soulClient = AnnotationUtils.findAnnotation(method, SoulClient.class);
                if (Objects.nonNull(soulClient)) {
                    executorService.execute(() -> post(buildJsonParams(soulClient, contextPath, bean, method)));
                }
            }
        }
        return bean;
    }
}
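
On the service side, registration then comes down to annotating the handler methods. A hypothetical Spring MVC controller (the @SoulClient attribute names path and desc are assumptions inferred from the scanning code above):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/order")
public class OrderController {

    // The bean post-processor finds this @SoulClient annotation (from the soul-client SDK),
    // assembles the service metadata and posts it to soul-admin when the application starts.
    @SoulClient(path = "/order/findById", desc = "find order by id")
    @GetMapping("/findById")
    public String findById(@RequestParam("id") final String id) {
        return "order-" + id;
    }
}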

4.4 Request proxy implementation

Every gateway is essentially built on the proxy pattern: the front-end request goes through the gateway, which proxies and forwards it to a backend service node. Incoming requests must conform to the following request structure.

public class RequestDTO implements Serializable {

    // Module name
    private String module;

    // Method name
    private String method;

    // Remote call type
    private String rpcType;

    // HTTP method
    private String httpMethod;

    // Signature
    private String sign;

    // Timestamp
    private String timestamp;

    // Application key
    private String appKey;

    // HTTP request path
    private String path;

    // Application contextPath
    private String contextPath;

    // Real request path, filled in by the request-forwarding plugin
    private String realUrl;

    // Service metadata, filled in by the request-forwarding plugin
    private MetaData metaData;

    // Dubbo request parameters, filled in by the request-forwarding plugin
    private String dubboParams;

    // Request start timestamp, filled in by the request-forwarding plugin
    private LocalDateTime startDateTime;
    ...
}

The routing plugin looks up the upstream nodes in UpstreamCacheManager, resolves the upstream node's real request path, and puts the resulting request URL into the request context for the next plugins in the chain to use.

public class DividePlugin extends AbstractSoulPlugin {

    private final UpstreamCacheManager upstreamCacheManager;
    ...
    @Override
    protected Mono<Void> doExecute(final ServerWebExchange exchange, final SoulPluginChain chain,
                                   final SelectorData selector, final RuleData rule) {
        ...
        final DivideRuleHandle ruleHandle = GsonUtils.getInstance().fromJson(rule.getHandle(), DivideRuleHandle.class);
        // Get the cached upstream node list for this selector
        final List<DivideUpstream> upstreamList =
                upstreamCacheManager.findUpstreamListBySelectorId(selector.getId());
        ...
        // Get the caller's IP and pick an upstream node via the configured load-balance strategy
        final String ip = Objects.requireNonNull(exchange.getRequest().getRemoteAddress()).getAddress().getHostAddress();
        DivideUpstream divideUpstream = LoadBalanceUtils.selector(upstreamList, ruleHandle.getLoadBalance(), ip);
        ...
        // Build the real HTTP url for the backend node
        String domain = buildDomain(divideUpstream);
        String realURL = buildRealURL(domain, requestDTO, exchange);
        // Set the timeout, then continue the plugin chain
        ...
        return chain.execute(exchange);
    }
}
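
LoadBalanceUtils.selector above picks one upstream node from the cached list according to the rule's load-balance strategy. Purely for illustration, a minimal sketch of the simplest possible strategy, random selection (a hypothetical helper, not Soul's actual implementation):

import java.util.List;
import java.util.Random;

// Hypothetical sketch: pick a random upstream node from the cached list.
public final class RandomLoadBalanceSketch {

    private static final Random RANDOM = new Random();

    public static DivideUpstream select(final List<DivideUpstream> upstreamList) {
        if (upstreamList == null || upstreamList.isEmpty()) {
            return null;
        }
        // A weighted, hash-based or round-robin strategy would replace this single line.
        return upstreamList.get(RANDOM.nextInt(upstreamList.size()));
    }
}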

UpstreamCacheManager manages the upstream server nodes. The cached node list can be looked up by selector ID, and the cache receives events to dynamically update the stored list of service node addresses.

public class UpstreamCacheManager {

    private static final BlockingQueue<SelectorData> BLOCKING_QUEUE = new LinkedBlockingQueue<>(1024);

    // selectorId -> upstream node list
    private static final Map<String, List<DivideUpstream>> UPSTREAM_MAP = Maps.newConcurrentMap();
    ...
    public void execute(final SelectorData selectorData) {
        final List<DivideUpstream> upstreamList =
                GsonUtils.getInstance().fromList(selectorData.getHandle(), DivideUpstream.class);
        if (CollectionUtils.isNotEmpty(upstreamList)) {
            UPSTREAM_MAP.put(selectorData.getId(), upstreamList);
        } else {
            UPSTREAM_MAP.remove(selectorData.getId());
        }
    }
}

On the HTTP side there are two client implementations, NettyClient and WebClient, and the main request flow lives in the WebClientPlugin and NettyHttpClientPlugin classes. Essentially, the information from the front-end request is used to rebuild an identical request, which is then sent to the backend service node, with the returned Mono subscribed on the scheduler. Each backend service is located via the MetaData it registered.

public class WebClientPlugin implements SoulPlugin {

    @Override
    public Mono<Void> execute(final ServerWebExchange exchange, final SoulPluginChain chain) {
        final RequestDTO requestDTO = exchange.getAttribute(Constants.REQUESTDTO);
        assert requestDTO != null;
        // Get the real backend URL from the request context
        String urlPath = exchange.getAttribute(Constants.HTTP_URL);
        ...
        // Rebuild the request with the original HTTP method and parameters
        HttpMethod method = HttpMethod.valueOf(exchange.getRequest().getMethodValue());
        WebClient.RequestBodySpec requestBodySpec = webClient.method(method).uri(urlPath);
        ...
        // Send the rebuilt request to the backend service node
        return handleRequestBody(requestBodySpec, exchange, timeout, chain, userJson);
    }
}

Compared with the HTTP client, the Dubbo request proxy is more complex because it relies on generic (generalized) invocation. Generic invocation is used when the service consumer does not have the API interface class or the model classes (such as the POJOs used for parameters and return values) on its classpath; parameters and return values are represented as Maps instead.

public class DubboPlugin extends AbstractSoulPlugin {
    ...
    @Override
    protected Mono<Void> doExecute(final ServerWebExchange exchange, final SoulPluginChain chain,
                                   final SelectorData selector, final RuleData rule) {
        final String body = exchange.getAttribute(Constants.DUBBO_PARAMS);
        final RequestDTO requestDTO = exchange.getAttribute(Constants.REQUESTDTO);
        assert requestDTO != null;
        // Make a generic dubbo call using the registered service metadata
        final Object result = dubboProxyService.genericInvoker(body, requestDTO.getMetaData());
        ...
        // Continue the plugin chain
        return chain.execute(exchange);
    }
}

The whole Dubbo request/response flow is implemented by DubboProxyService, which first obtains a generic service consumer, then performs the call and returns the result.

public class DubboProxyService {
    ...
    public Object genericInvoker(final String body, final MetaData metaData) throws SoulException {
        ReferenceConfig<GenericService> reference;
        GenericService genericService;
        try {
            // Get the cached generic service reference by service name
            reference = ApplicationConfigCache.getInstance().get(metaData.getServiceName());
            ...
            genericService = reference.get();
        } catch (Exception ex) {
            ...
        }
        try {
            ...
            return genericService.$invoke(metaData.getMethodName(), new String[]{}, new Object[]{});
            ...
        } catch (GenericException e) {
            ...
        }
    }
}
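
The snippet above passes empty parameter arrays for brevity. Outside of Soul, a plain Dubbo generic invocation looks like the sketch below (standard Dubbo API; the interface, method, and parameter names are made-up examples, and the application/registry configuration is omitted):

import java.util.HashMap;
import java.util.Map;

import org.apache.dubbo.config.ReferenceConfig;
import org.apache.dubbo.rpc.service.GenericService;

public class GenericInvokeDemo {

    public static void main(String[] args) {
        // Build a reference to the remote service using only its interface name as a string
        ReferenceConfig<GenericService> reference = new ReferenceConfig<>();
        reference.setInterface("com.example.user.UserService");
        reference.setGeneric("true"); // enable generic invocation

        GenericService genericService = reference.get();

        // Complex parameters are represented as Maps instead of POJO classes
        Map<String, Object> query = new HashMap<>();
        query.put("class", "com.example.user.UserQuery");
        query.put("userId", "1001");

        // $invoke(methodName, parameterTypes, arguments); the result also comes back as a Map or primitive
        Object result = genericService.$invoke("findUser",
                new String[]{"com.example.user.UserQuery"},
                new Object[]{query});
        System.out.println(result);
    }
}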

For generic Dubbo calls the service consumer is constructed manually, and ApplicationConfigCache caches the constructed consumers to improve performance.

public final class ApplicationConfigCache {

    // serviceName -> generic service reference
    private final LoadingCache<String, ReferenceConfig<GenericService>> cache = CacheBuilder.newBuilder()
            ....;

    // Initialize the dubbo application and registry configuration
    public void init(final String register) {
        if (applicationConfig == null) {
            applicationConfig = new ApplicationConfig("soul_proxy");
        }
        if (registryConfig == null) {
            registryConfig = new RegistryConfig();
            registryConfig.setProtocol("dubbo");
            registryConfig.setId("soul_proxy");
            registryConfig.setRegister(false);
            registryConfig.setAddress(register);
        }
    }

    // Return the cached reference, or build a new one if it is not initialized yet
    public ReferenceConfig<GenericService> initRef(final MetaData metaData) {
        try {
            ReferenceConfig<GenericService> referenceConfig = cache.get(metaData.getServiceName());
            if (StringUtils.isNoneBlank(referenceConfig.getInterface())) {
                return referenceConfig;
            }
        } catch (Exception e) {
            LOG.error("init dubbo ref ex:{}", e.getMessage());
        }
        return build(metaData);
    }

    // Build a generic service consumer from the service metadata
    public ReferenceConfig<GenericService> build(final MetaData metaData) {
        ReferenceConfig<GenericService> reference = new ReferenceConfig<>();
        ...
        try {
            String rpcExt = metaData.getRpcExt();
            DubboParamExt dubboParamExt = GsonUtils.getInstance().fromJson(rpcExt, DubboParamExt.class);
            ...
        } catch (Exception e) {
            ...
        }
        try {
            // Initialize the reference and cache it if it was created successfully
            Object obj = reference.get();
            if (obj != null) {
                cache.put(metaData.getServiceName(), reference);
            }
        } catch (Exception ex) {
            ...
        }
        return reference;
    }
    ...
}

4.5 Distributed traffic limiting

The framework ships with its own Redis-based rate limiting: the Lua script is loaded through the Spring framework and then executed via the reactive Redis client.

public class RedisRateLimiter {

    public Mono<RateLimiterResponse> isAllowed(final String id, 
                final double replenishRate, final double burstCapacity) {
      
        try {
            List<String> keys = getKeys(id);
            List<String> scriptArgs = Arrays.asList(replenishRate + "", burstCapacity + "",
                    Instant.now().getEpochSecond() + "", "1");
            Flux<List<Long>> resultFlux = 
                Singleton.INST.get(ReactiveRedisTemplate.class)
                    .execute(this.script, keys, scriptArgs);
            return resultFlux.onErrorResume(throwable -> 
                    Flux.just(Arrays.asList(1L, -1L)))
                    .reduce(new ArrayList<Long>(), (longs, l) -> {
                        longs.addAll(l);
                        return longs;
                    }).map(results -> {
                        boolean allowed = results.get(0) == 1L;
                        Long tokensLeft = results.get(1);
                        RateLimiterResponse rateLimiterResponse = new RateLimiterResponse(allowed, tokensLeft);
                        LogUtils.debug(LOGGER, "RateLimiter response:{}", rateLimiterResponse::toString);
                        return rateLimiterResponse;
                    });
        } catch (Exception e) {
            ...
        }
        return Mono.just(new RateLimiterResponse(true, -1));
    }
}
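
A hedged sketch of how a plugin might consume isAllowed (the class, rule id, rate values, and the RateLimiterResponse#isAllowed getter name are assumptions for illustration, not the framework's actual rate-limit plugin):

import org.springframework.http.HttpStatus;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

// Hypothetical consumer of RedisRateLimiter: reject with HTTP 429 once the token bucket is empty.
public class RateLimitUsageSketch {

    private final RedisRateLimiter redisRateLimiter = new RedisRateLimiter();

    public Mono<Void> doExecute(final ServerWebExchange exchange, final SoulPluginChain chain) {
        // replenishRate = 10 tokens per second, burstCapacity = 100 tokens
        return redisRateLimiter.isAllowed("rule-id-123", 10, 100)
                .flatMap(response -> {
                    if (!response.isAllowed()) {
                        exchange.getResponse().setStatusCode(HttpStatus.TOO_MANY_REQUESTS);
                        return exchange.getResponse().setComplete();
                    }
                    return chain.execute(exchange);
                });
    }
}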

The Lua script implements a token bucket: tokens are refilled according to the time elapsed since the last request, and a request is allowed only when enough tokens remain (only the core of the script is shown).

local delta = math.max(0, now-last_refreshed)
local filled_tokens = math.min(capacity, last_tokens+(delta*rate))
local allowed = filled_tokens >= requested
local new_tokens = filled_tokens
local allowed_num = 0
if allowed then
  new_tokens = filled_tokens - requested
  allowed_num = 1
end

4.6 Distributed Configuration cache

The gateway node's local cache stores mappings such as plugin name <-> plugin data, plugin name <-> selectors, selector <-> rule data, and appKey <-> authorization data.

public abstract class AbstractLocalCacheManager implements LocalCacheManager {

    /**
     * pluginName -> PluginData.
     */
    static final ConcurrentMap<String, PluginData> PLUGIN_MAP = Maps.newConcurrentMap();

    /**
     * pluginName -> SelectorData.
     */
    static final ConcurrentMap<String, List<SelectorData>> SELECTOR_MAP = Maps.newConcurrentMap();

    /**
     * selectorId -> RuleData.
     */
    static final ConcurrentMap<String, List<RuleData>> RULE_MAP = Maps.newConcurrentMap();

    /**
     * appKey -> AppAuthData.
     */
    static final ConcurrentMap<String, AppAuthData> AUTH_MAP = Maps.newConcurrentMap();
}

The framework supports WebSocket, ZooKeeper, and other mechanisms for synchronizing configuration changes into the local cache.

public class ZookeeperSyncCache extends CommonCacheHandler implements CommandLineRunner, DisposableBean {
    ...
    @Override
    public void run(final String... args) {
        watcherData();
        watchAppAuth();
        watchMetaData();
    }

    private void watcherData() {
        final String pluginParent = ZkPathConstants.PLUGIN_PARENT;
        // Create the parent zk node for plugins if it does not exist
        if (!zkClient.exists(pluginParent)) {
            zkClient.createPersistent(pluginParent, true);
        }
        // Load the child nodes of the plugin node
        List<String> pluginZKs = zkClient.getChildren(ZkPathConstants.buildPluginParentPath());
        for (String pluginName : pluginZKs) {
            loadPlugin(pluginName);
        }
        // Subscribe to zk node change events to refresh the local cache
        zkClient.subscribeChildChanges(pluginParent, (parentPath, currentChildren) -> {
            if (CollectionUtils.isNotEmpty(currentChildren)) {
                for (String pluginName : currentChildren) {
                    loadPlugin(pluginName);
                }
            }
        });
    }
}

The above covers the main implementation principles of the Soul gateway; other aspects, such as log monitoring and further high-performance design details, are not expanded on here. Judging from how it behaves in the production environment and from reading the source, many of its design ideas are simple to understand and well worth a careful read.

5. Summary

This article is a practical summary of a microservice architecture optimization at my former employer. It describes the role of a microservice API gateway and how we selected one, the backend architecture we had at the time, and the main implementation principles of the Soul gateway we adopted.

Reference

  • Introduction to Spring Reactor: blog.csdn.net/daniel7443/…

  • Gateway technology selection: blog.csdn.net/tianyaleixi…

  • Getting started with Spring Reactor: zhuanlan.zhihu.com/p/45351651