Foreword

Relational databases have been the preferred way to store structured data for decades. From universities to industry, they have been at the core of data research and applications, and have fostered a large pool of talent devoted to database development, maintenance, and tuning. In recent years, with the birth and popularity of assorted NoSQL databases and the Hadoop ecosystem, the RDBMS seemed to face an enormous challenge, and the "rigorous but rigid" relational database was once written off by the market. It turns out, however, that even in the face of competition from many upstarts, the long-established relational database has not died out; it has kept innovating and continues to play a central role in modern back-end data architectures.

Microsoft's SQL Server, a standout in the commercial relational database camp, consistently ranks among the top three in the DB-Engines popularity ranking. Thanks to its convenient graphical management interface and general ease of use, many people's relational database journey starts with SQL Server. With the launch of SQL Server 2017, let's take a look at the past and present of SQL Server and explore the development path of Microsoft's data platform.

Microsoft SQL Server has a storied history. It began as a three-way collaboration between Microsoft, Sybase, and Ashton-Tate (the company behind dBase): Sybase's SQL Server database was ported to the OS/2 operating system, and the product was born. Later, with the failure of OS/2 and the rise of the Windows NT operating system, Microsoft ended the partnership with Sybase and began to develop and maintain the database product independently for the Windows platform; hence the origin of Microsoft SQL Server. To avoid confusion, Sybase renamed its own database from Sybase SQL Server to Adaptive Server Enterprise (ASE), so SQL Server now refers only to Microsoft's relational database. Sybase, whose database products were once popular in finance and other industries, was acquired by SAP in 2010.

Microsoft SQL Server, having parted ways with Sybase, moved steadily forward. SQL Server 7.0 and SQL Server 2000 largely rewrote and extended the original Sybase code base, formally joining the ranks of enterprise databases; SQL Server 2005 then truly matured, forming a commercial database triumvirate with Oracle and IBM DB2. After continuous investment and evolution through the 2008, 2008 R2, 2012, 2014, and 2016 releases, the latest version, SQL Server 2017, was officially released on October 2, 2017; it is the protagonist of this article.

A review of recent years

Before we start analyzing the new features of SQL Server 2017, let’s take a look at the focus of SQL Server in recent years.

One focus is OLAP. Traditional relational databases generally use the row as the basic unit of processing and storage, which is convenient for inserting, deleting, updating, and looking up individual data records. In the face of ever-growing analytical and aggregation queries, however, row-based storage and processing becomes clumsy and inefficient. Therefore, in SQL Server 2012, code-named Denali, Microsoft introduced the new xVelocity column storage and analysis engine, giving SQL Server a high-quality columnar storage implementation along with a query execution engine highly optimized for vectorization and batch processing. A data table carrying OLAP workloads can opt into a nonclustered columnstore index, greatly improving the performance of analytical queries. Subsequent versions of SQL Server further enhanced and refined columnstore features, including the clustered columnstore index, which stores a table entirely in columnar format, and the delta-store mechanism, which supports updates to columnstore tables, further broadening the applicable scenarios and improving ease of use.
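As a sketch of what opting in looks like (the table and column names here are hypothetical, for illustration only):

```sql
-- Hypothetical fact table
CREATE TABLE dbo.SalesFact (
    SaleId    BIGINT NOT NULL,
    ProductId INT NOT NULL,
    SaleDate  DATE NOT NULL,
    Amount    DECIMAL(18, 2) NOT NULL
);

-- SQL Server 2012 style: a nonclustered columnstore index over selected columns
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesFact
    ON dbo.SalesFact (ProductId, SaleDate, Amount);

-- SQL Server 2014 onward: store the whole table in columnar format instead
-- (a table uses either form, not both at once)
-- CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact ON dbo.SalesFact;
```

Analytical queries against the table then automatically use the columnstore and batch-mode execution; no query changes are needed.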

I still remember the shock of trying SQL Server 2012's columnstore index for the first time on my ThinkPad T410 years ago: an analytical query over nearly 100 million rows returned in seconds on an ordinary laptop. Over the years, Microsoft's columnstore implementation has become one of the most efficient and reliable column storage designs in the industry. Microsoft has even exported the relevant technology to some extent: working with Hortonworks, it contributed a good deal of code to the open-source Apache Hive project (as part of the Stinger initiative) to help speed Hive up, and the essence of those ideas came from SQL Server's xVelocity column storage and analysis engine.

The second focus is OLTP. Traditional relational databases are inherently good at OLTP, so what is there to improve? A legacy of computing history is that classical database storage mechanisms were mostly designed and optimized for disk structures, with expensive memory acting more like a cache. With the development of hardware technology and ever-larger server memory, the data and indexes in many scenarios can now fit entirely in memory, which creates the conditions for a memory-first database engine. When a database engine and its storage are designed and optimized for memory from the ground up, the performance gains can be dramatic. Microsoft invested heavily in this area: its in-memory database technology, code-named Hekaton, was first announced in late 2012 and shipped with SQL Server 2014. Combined with its lock-free concurrency model and natively compiled stored procedures written in T-SQL, Hekaton can often deliver tens of times the performance of traditional solutions, completely changing the design of many systems and supporting high-load scenarios that were previously out of reach.
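A minimal sketch of the two pieces, assuming a database already configured with a memory-optimized filegroup; the table and procedure names are illustrative:

```sql
-- A durable memory-optimized table with a hash index sized for the workload
CREATE TABLE dbo.SessionState (
    SessionId   UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload     VARBINARY(8000),
    LastTouched DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- A natively compiled stored procedure: T-SQL compiled to machine code
CREATE PROCEDURE dbo.TouchSession @SessionId UNIQUEIDENTIFIER
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET LastTouched = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
```

The hash index and the compiled, lock-free access path are what remove the latching and interpretation overhead of the traditional engine.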

As you can see, as of SQL Server 2016, Microsoft's flagship product was already very powerful and complete, blending OLAP and OLTP workloads within a single product. So, just one year later, what new surprises can the latest SQL Server 2017 bring? Let's find out.

New features: cross-platform support and containerization

The first change in SQL Server 2017 worth mentioning is not a feature but a change to its operating environment: support for Linux servers. This is the first time a product in Microsoft's SQL Server family officially runs on Linux with full support, which greatly expands SQL Server's application scenarios and customer base. Keep in mind that while Windows Server licensing is not expensive, many companies that rely on the Linux ecosystem as their primary technology stack would not consider purchasing and operating Windows back-end servers, so SQL Server may have been ruled out at technology-selection time in the first place. With SQL Server 2017 officially supporting Linux, this barrier is removed, and SQL Server can finally take on its competitors on a new battlefield, which will undoubtedly help increase its market share.

Of course, for SQL Server, which appears deeply coupled to Windows, a polished Linux implementation is no easy task. How was it achieved? There are two key factors:

First, SQL Server has long had an underlying infrastructure called SQLOS. This component can bypass the operating system and the Win32 API to manage and organize CPU, memory, and other compute and storage resources directly, and to perform its own thread scheduling, in order to squeeze the most out of the underlying hardware.

Second, the Drawbridge project from Microsoft Research, originally designed as an application sandboxing mechanism. Its Library OS component implements more than a thousand commonly used Windows APIs on top of only about 50 underlying kernel calls, and it can also host components such as MSXML and the CLR.

Building on these two existing foundations, the SQL Server team rewrote them as needed and fully integrated them into a new-generation underlying layer, SQLPAL (SQL Platform Abstraction Layer), then ported the upper-level logic onto it; this is the key to how SQL Server achieved cross-platform support in a short time. Notably, SQLPAL, as a special abstraction mechanism, does not pay the performance cost usually associated with abstraction, because it sidesteps operating-system constraints to a degree and builds the runtime environment it needs from the bottom up.

SQL Server on Linux

As mentioned in the history review, SQL Server struck out on its own and grew up in the 1990s by strategically focusing on the Windows platform. More than twenty years later, SQL Server is taking a critical step toward embracing Linux, which is also a strategic shift. History seems to come full circle, but each move was in fact the best business choice for its time and circumstances.

To support SQL Server on Linux, Microsoft has partnered with established Linux vendors such as Red Hat to provide customers with integrated solutions that ensure operating-system-level reliability and compatibility with SQL Server. SQL Server can be installed on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu. Along with Linux support comes container support: SQL Server 2017 runs happily under Docker, and Microsoft has released an official SQL Server image based on SQL Server on Linux on Docker Hub (a version for Windows containers is also offered). A major motivation for containerizing SQL Server is DevOps. In the past, it was difficult to start or shut down a heavyweight database instance like SQL Server automatically on demand; now, with containers and images, starting and stopping SQL Server via Docker is trivial and easy to integrate into DevOps pipelines through scripts.
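As a sketch, spinning up and tearing down a disposable instance looks roughly like this (the password is a placeholder; the image name reflects the 2017-era naming on Docker Hub):

```shell
# Pull the official image and start a disposable SQL Server 2017 instance
docker pull microsoft/mssql-server-linux:2017-latest
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong!Passw0rd' \
    -p 1433:1433 --name mssql-dev -d microsoft/mssql-server-linux:2017-latest

# Tear it down once the tests finish
docker stop mssql-dev && docker rm mssql-dev
```

A CI script can wrap the two steps around a test run, giving every build a clean database.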

I have done exactly this in my open-source library: I migrated a series of integration tests that relied on SQL Server LocalDB to the new SQL Server container, which removed the dependency on a local SQL Server installation in the test environment and cleared the way for running the tests in a remote CI environment. Beyond DevOps, you can also try running containerized SQL Server directly in production so that the database participates in production container orchestration; for lack of space, I will not expand on that here.

As described above, SQL Server 2017 has taken a big step from a Windows-only strategy to supporting Linux and Docker containers alongside Windows. This decision will undoubtedly have a profound impact on SQL Server and on the database market at large.

New features: graph data processing

Learning from the emerging NoSQL world is another hallmark of modern relational database development. For example, JSON, the bread and butter of document databases, became a first-class citizen in SQL Server 2016, with comprehensive support for storage, indexing, and querying. In SQL Server 2017, the database veteran moves with the times again, taking a page from upstarts like Neo4j and boldly introducing graph processing support.

As is well known, the core entities of a graph database are nodes and edges, each of which usually carries attributes, with nodes related to one another through edges. Modeling a graph in a traditional relational database is not difficult: tables for nodes and edges, associated via foreign keys, can express the graph completely. The real problems lie in query expression and query performance: some typical graph queries are clumsy and awkward to express in a traditional relational database, and when a query hops across multiple nodes, the SQL becomes error-prone and the performance hard to guarantee.

The graph database feature in SQL Server 2017 attempts to address both issues. Graph information is still modeled and stored as tables, but additional extensions greatly improve ease of use. When creating a table, you can now use T-SQL's extended syntax to declare it AS NODE or AS EDGE. SQL Server then implicitly adds pseudo-columns, $node_id for node tables and $edge_id, $from_id, and $to_id for edge tables, to record nodes and the relationships between them. For querying, SQL Server 2017 partly borrows the syntax of Neo4j's Cypher query language, introducing the MATCH keyword to let users express traversals over a directed graph in ASCII-art style, while integrating seamlessly into the existing SQL query system. Let's look at an official query example:
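The fragment below follows the node and edge tables used in Microsoft's documentation sample (Person, Restaurant, friendOf, likes):

```sql
-- Node and edge tables, per Microsoft's sample schema
CREATE TABLE Person (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;
CREATE TABLE Restaurant (ID INTEGER NOT NULL, name VARCHAR(100)) AS NODE;
CREATE TABLE likes (rating INTEGER) AS EDGE;
CREATE TABLE friendOf AS EDGE;

-- "Restaurants that John's friends like": a two-hop traversal
SELECT Restaurant.name
FROM Person person1, friendOf, Person person2, likes, Restaurant
WHERE MATCH(person1-(friendOf)->person2-(likes)->Restaurant)
  AND person1.name = 'John';
```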

As you can see, the T-SQL fragment above expresses a "two-hop" query, "restaurants that John's friends like", across two different edge types with ease, and the ASCII arrows nicely convey the directed edges. Compared with traditional SQL of the same semantics, this expression is undoubtedly much clearer and more intuitive.

Graph database features have thus made their debut in SQL Server 2017. Of course, since the feature is brand new, it is objectively not yet fully mature. For example, advanced queries common in graph databases, such as transitive closure (determining connectivity between nodes across an unlimited number of hops) and polymorphism (querying the different types of nodes a node can reach), are not yet supported. In terms of performance, SQL Server currently relies mainly on the existing indexes and table-join optimizations of its relational query optimizer and has not introduced data structures specialized for graphs. Further enhancements in these directions can be expected in future releases and updates.

New features: in-database machine learning

Machine learning is undoubtedly a buzzword of recent years and an integral part of modern data applications. Although Microsoft has strong research capabilities and achievements in this area, it was slow to build out the related developer ecosystem. In earlier years, SQL Server Analysis Services had built-in data mining functions, but the application scenarios were very limited because they were heavyweight and hard to integrate programmatically. Only in recent years has Microsoft made a big push: in early 2015, it decisively acquired Revolution Analytics. The acquisition was critical in that it gave Microsoft a comprehensive open-source and commercial solution for the R language ecosystem, reaching the vast R community and its data scientists.

So, as a classical relational database, how does SQL Server cope with and adapt to this new trend? Thanks to the Revolution Analytics acquisition, SQL Server 2016 shipped the groundbreaking SQL Server R Services, which lets the R language and its ecosystem run directly as a service inside the SQL Server environment. SQL Server 2017 adds another language with a strong AI ecosystem: Python. The original R Services, together with the newly introduced Python support, was renamed Machine Learning Services.

The core product idea of Machine Learning Services is to run machine learning workloads directly inside the database: familiar Python/R scripts and the numerous machine learning libraries can run on the database server, connecting seamlessly with SQL. How does this approach compare with doing machine learning outside the database? In the author's view, its core advantage lies in the word "integration", which can be understood from several aspects:

The first is convenient data integration: SQL can be used to access all kinds of business data, without complicated data movement and preparation, and feed it as input to the various AI algorithms in Python/R. The second is efficient model integration: trained models can be managed and stored in SQL Server and invoked conveniently through stored procedures to obtain results, with SQL Server automatically handling efficient data transfer and serialization between the execution engine and the Python/R runtime. The third is painless application integration: applications gain machine learning capabilities through ordinary database connections and stored procedure results, without complex rituals or a special architecture; everything stays consistent with traditional database access.
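A minimal sketch of the stored-procedure entry point (the query and the Python snippet are illustrative; the instance must have Machine Learning Services with Python installed and `external scripts enabled` turned on via sp_configure):

```sql
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
# InputDataSet / OutputDataSet are the conventional pandas data-frame bindings
OutputDataSet = InputDataSet.assign(doubled = InputDataSet["val"] * 2)
',
    @input_data_1 = N'SELECT 1 AS val UNION ALL SELECT 2'
WITH RESULT SETS ((val INT, doubled INT));
```

The same pattern scales from a toy expression to training or scoring a real model, with the result set flowing back to the caller like any other query.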

Of course, to be fair, the in-database machine learning design also has its technical limitations, mainly in scalability: model training consumes a lot of computing resources, so a single server can become a bottleneck. Even though SQL Server ships commercial-grade libraries (ScaleR/ScalePy/MicrosoftML) whose performance exceeds the open-source R implementation and which support multithreading and GPU acceleration, it may still be unsuitable for scenarios that require training over massive data on large clusters. In addition, there are concerns about the development, testing, and operational costs of embedding Python/R scripts in the database. These are areas Microsoft will presumably enhance in the future, and R Tools for Visual Studio has already made some efforts on the tooling side.

Compared with similar solutions, SQL Server Machine Learning Services has unique advantages. The in-database approach makes embracing AI simpler and more direct, which means lower development cost and faster time to value, and it is particularly suitable for existing SQL Server-based enterprise applications experimenting with and upgrading to machine learning. Let us look forward to more practical cases and to its further improvement.

New features: adaptive query processing

If in-database machine learning brings intelligence to applications, can SQL Server's own query execution engine become smarter as well? The answer is yes, and that is a direction SQL Server 2017 pushes in, with a set of related features called Adaptive Query Processing.

The key to executing a SQL query efficiently is a reasonable execution plan. Generally speaking, the execution plan is generated before the query runs: the engine synthesizes it from metadata such as statistics and index state, together with estimates of the magnitude of each execution step and intermediate result set. Execution plans are also often cached so that queries of the same shape can reuse them, improving overall query performance. All very nice, right? In the real world, however, these sophisticated mechanisms run into thorny problems. For example, parameterized queries are often "sensitive" to their inputs: different parameters can produce wildly different result magnitudes, making it difficult to predict and select the optimal plan in advance. Or statistics may be stale, leading to misestimated intermediate result sizes and a wrong plan. Query hints can patch such problems in the short term but may backfire in the long run.

To ease these pain points, intelligence and self-tuning at execution time are needed, and the adaptive query processing in SQL Server 2017 is a worthwhile exploration in this direction. The core idea is to defer parts of plan generation, waiting until some of the actual input is in place before choosing the downstream execution strategy. For example, the newly introduced adaptive join operator can dynamically choose between a nested loop join and a hash join based on the actual magnitude of the upstream input. Likewise, multi-statement table-valued functions (MSTVFs) no longer rely on a fixed cardinality guess; the function can be executed first and the downstream plan formulated afterwards, ensuring better execution efficiency. One DBA praised these new features as "saving me half of the query performance tuning I did last week."
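These features light up automatically once a database opts into the SQL Server 2017 compatibility level; no query changes are required. The database name below is illustrative:

```sql
-- Compatibility level 140 (SQL Server 2017) enables adaptive joins,
-- interleaved execution for MSTVFs, and batch-mode memory grant feedback
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 140;
```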

Beyond deferred plan generation at execution time, the database can also take feedback from queries that have already run, so that subsequent queries of the same shape obtain a more appropriate plan, a more advanced form of reuse than simple plan caching. Here SQL Server 2017 implements memory grant feedback for batch mode, which helps subsequent queries learn "lessons" from the actual memory allocations of earlier runs and thus estimate and allocate memory more accurately.

As you can see, with adaptive query processing SQL Server 2017 has begun a serious push on execution-time intelligence, adapting and optimizing automatically across a range of scenarios while remaining transparent to developers. The main limitation is that many of these features currently work only in batch mode (typically seen in queries over columnstore indexes), not in traditional row mode. Microsoft will presumably extend support to row mode in future releases and further broaden the applicable scenarios of adaptive query processing.

SQL Server and big data

In the big data era, distributed data processing and computing frameworks, represented by Hadoop, have sprung up one after another and achieved unprecedented ecosystem prosperity. SQL on Hadoop solutions represented by Hive, Impala, and Presto have mounted a strong challenge to traditional databases, leaving relational databases in a somewhat awkward position. So how does SQL Server respond to and embrace the "big data" wave?

One answer is to strengthen SQL Server's own storage and processing capabilities and expand its applicable scenarios; in other words, SQL Server can handle big data too. Thanks to the continual advance of server hardware, the capacity and processing power of a single server keep increasing, and with its excellent partitioning mechanism and columnstore compression, SQL Server can in fact handle a considerable amount of data. Last year, Microsoft and Intel collaborated on an interesting experiment that used a single SQL Server machine to host up to 100 terabytes of data, and detailed tests showed impressive overall performance. In fact, the 100 TB scale already satisfies many "big data" storage and computation needs, and in many scenarios SQL Server is easier to manage and maintain, has a stable and simple architecture, and costs less overall than Hadoop cluster solutions that can feel over-engineered.

Of course, a single server will still be overwhelmed by petabyte-scale data. For that level, Microsoft offers SQL Server Parallel Data Warehouse (PDW), which can be understood as a distributed variant of SQL Server: an MPP analytical database built on the SQL Server core. PDW is usually sold together with hardware vendors as an appliance and comfortably supports petabyte-scale data storage and analysis.

The second answer is, naturally, full convergence and integration with open-source big data solutions, especially the de facto big data standard: Hadoop. The protagonist here is PolyBase, a technology Microsoft first introduced in SQL Server 2012 PDW. No longer exclusive to PDW, it has become an important bridge between standard SQL Server and Hadoop. PolyBase's core capability is defining Hadoop-facing external tables in a SQL Server context and executing SQL against them, a design very similar to Hive's external tables and PostgreSQL's FDW extensions. On top of this, it is easy to import large amounts of data from Hadoop into SQL Server, or export SQL Server data back into the big data cluster. Compared with a pure data movement solution like Sqoop, PolyBase has better performance (based on direct HDFS access rather than MapReduce) and benefits from the external table abstraction, allowing joint queries with local database tables without first landing the external Hadoop data.
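A rough sketch of the external-table workflow; the Hadoop address, file layout, and table names here are all hypothetical:

```sql
-- Point at a Hadoop cluster (address is a placeholder)
CREATE EXTERNAL DATA SOURCE MyHadoop
WITH (TYPE = HADOOP, LOCATION = 'hdfs://10.0.0.1:8020');

-- Describe the file layout on HDFS
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- An external table over files under /logs/
CREATE EXTERNAL TABLE dbo.WebLogs (
    url  NVARCHAR(400),
    hits BIGINT
)
WITH (LOCATION = '/logs/', DATA_SOURCE = MyHadoop, FILE_FORMAT = CsvFormat);

-- Join external Hadoop data with a local table (dbo.Pages) in one query
SELECT l.url, l.hits, p.owner
FROM dbo.WebLogs AS l
JOIN dbo.Pages AS p ON p.url = l.url;
```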

As the above shows, in the face of the surging big data tide, Microsoft has not stood pat. While strengthening its own capabilities, SQL Server has chosen to coexist harmoniously and integrate with the big data ecosystem.

SQL Server and cloud computing

With cloud computing in full swing, embracing and supporting the cloud is on every database product's agenda. Here SQL Server is a high achiever and an important part of Microsoft's cloud computing strategy.

As the first step toward embracing the cloud, Microsoft naturally offered SQL Server as a PaaS service, originally launched in 2010 as SQL Azure and later renamed Azure SQL Database. With traditional commercial databases, procurement and installation are often lengthy and costly even for developers who merely want to try the product; with Azure SQL Database, a hosted database instance is ready to go with a single click in the Azure portal. This change in business model greatly lowered the barrier to using Microsoft's database and expanded its usage scenarios.

Technically, Azure SQL Database is far more than SQL Server served from virtual machines; it is a PaaS product built on SQL Server's capabilities and designed entirely for the cloud, keeping pace with SQL Server feature upgrades and offering good elastic scaling. Azure SQL Database uses the Database Transaction Unit (DTU) to describe an instance's performance level; users can adjust an instance's DTU size on demand at any time to match the workload, effectively saving costs. In addition, Azure SQL Database supports the Elastic Pool feature, which lets a group of database instances share a single resource pool, especially suitable for multi-tenant SaaS applications.
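Resizing is a one-line T-SQL operation; the database name and target tier below are illustrative:

```sql
-- Move the database to the Standard S3 performance level (100 DTUs)
ALTER DATABASE MyAppDb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');
```

The same change can also be made from the Azure portal slider or via scripting, which is what makes matching capacity to a shifting workload practical.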

Azure SQL Data Warehouse, announced at Microsoft's Build 2015 conference, is SQL Server's other close relative in the cloud. It is essentially a cloud version of SQL Server PDW: a distributed data warehouse for large-scale data computation and analysis. Its core technology, beyond distributed scheduling, is the columnstore index running on SSD storage. Compared with its main rival, AWS Redshift, Azure SQL Data Warehouse benefits from Azure's separation of storage and compute: compute capacity can be adjusted dynamically or even "paused", temporarily turning off compute and leaving only storage, a killer feature that saves a lot of money in many data warehouse scenarios and is the reason many customers choose Azure.

Of course, sometimes you still need SQL Server running in an Azure VM to overcome the limitations of a PaaS-style cloud database, for example to use SQL Agent or to install other supporting software alongside it. Azure also provides convenient SQL Server virtual machine images for this situation, ready out of the box, with the database license fee billed by usage time.

As another form of embracing the cloud, SQL Server also emphasizes interoperability with the cloud, treating it as its own extension and integrating into the cloud data platform. First, since the 2016 release, SQL Server has supported using the cloud as an extension of the database: the Stretch Database feature can automatically and seamlessly migrate cold data from the local database into Azure, with no adjustment needed to front-end queries. Second, with the ever-evolving PolyBase technology, SQL Server can integrate seamlessly into cloud-based big data solutions such as Microsoft's own Azure Data Lake or HDInsight, which is based on Hortonworks' HDP.

Status and outlook

SQL Server is now in its prime, no longer the junior player once considered "only suitable for small and medium enterprises". Having won large benchmark customers such as NASDAQ and NTT DoCoMo, SQL Server is on the offensive and fears no competitor. This is echoed in Gartner's recent Magic Quadrant reports: Microsoft sits in the Leaders quadrant across operational databases, analytical data management solutions, and business intelligence.

Gartner Operational Database Management Systems (ODBMS) Magic Quadrant

Just one year after SQL Server 2016, SQL Server 2017 has been released; Microsoft has clearly accelerated its release cadence. With Linux support, SQL Server 2017 has every reason to go a step further. This is a technology platform choice, but also a business strategy innovation: unbundled from Windows, SQL Server is about to open up a new market.

In terms of usage scenarios, as described above, SQL Server can now handle high-frequency OLTP workloads, serve as an OLAP-oriented data warehouse, act as an in-memory real-time data processing component, or work as a driving engine for application intelligence. SQL Server has become remarkably "all-round".

It is also worth mentioning that, compared with the segmented product strategy and dazzling product lists of competitors such as Oracle and SAP, Microsoft chose a "simple and integrated" design philosophy and business model. A single SQL Server Enterprise Edition includes column storage, the in-memory database, and other features that may need to be purchased separately from competitors, which is more convenient commercially and simplifies system architecture. On edition strategy, Microsoft has also made a smart move, bringing most of the important features down into Standard Edition and Express Edition and differentiating major editions mainly by data scale, so that advanced features reach more potential customers. In addition, Microsoft offers a freely downloadable Developer Edition, functionally identical to the top-tier Enterprise Edition, including in data scale, except that it cannot be used in production, lowering the barrier for developers to experience and try out enterprise-grade features.

A hot topic is surely the comparison with open-source relational databases such as MySQL and PostgreSQL, as well as the various SQL on Hadoop engines. Objectively speaking, the open-source wave has indeed had a huge impact on commercial databases; for direct cost reasons, many enterprises, with the Internet industry as the representative, have adopted open-source solutions. But as understanding of open-source systems and their operating model has deepened over the years, and especially after much practical experience has accumulated, the market has grown more rational and begun to weigh the pros, cons, and applicable scenarios of the open-source model more objectively. After all, there is no free lunch: when an enterprise's core business runs heavily on open-source software, it may also need to purchase commercial support services that are far from cheap, or hire many senior engineers who can understand and control source-level details. As a result, many enterprises will prefer to go straight to a commercial database with better performance, higher stability, and richer enterprise features, because for these enterprises, SQL Server has a lower total cost of ownership.

SQL Server also has deep accumulation in development environments and tool ecosystems. SQL Server Management Studio, in which Microsoft continues to invest, has always been a well-known database management tool, bringing a powerful and easy-to-use graphical management interface to SQL Server, and it is a free download. In every edition of Visual Studio, you can also connect to and query SQL Server during development through the Server Explorer built into VS, a good example of the integration advantages Microsoft can offer. Also worth mentioning is programming language support: besides its own C#/.NET, the SQL Server team has in recent years significantly strengthened support for other popular languages, especially Java and PHP, bringing high-quality client libraries to those languages and platforms.

Finally, a word on SQL Server in the Chinese market. In China, thanks to its cultivation in the education sector and excellent usability, SQL Server has always enjoyed a good grassroots base and a friendly image. To break into China's high-end market, however, it still needs to build the brand further, establish benchmark cases, and dispel the lingering impression of being "suitable only for small and medium-sized enterprises". To achieve this, besides continuously improving the product to meet the demanding needs of large customers, Microsoft must also invest in the local technical community and help practitioners achieve good career development, forming a virtuous circle. In addition, with the landing and growth of the Azure cloud in China, the SQL Server family can ride the development of the public cloud and share in its growth dividend.

Closing words

Relational databases have a long history and still play an irreplaceable role in many scenarios. SQL Server, as a leader among relational databases, keeps climbing and radiating vitality. With the release of SQL Server 2017, and with Linux support as its killer feature, Microsoft is set to further consolidate its market position and mount a strong assault on new markets.

In fact, SQL Server has far more features than this article can cover. Due to space constraints, enterprise-grade features such as AlwaysOn and Always Encrypted were not mentioned, and the business intelligence suite of SQL Server Integration Services, Analysis Services, and Reporting Services was not covered in detail. Still, through the analysis above we have gained a clear view of the outline and development strategy of this heavily invested commercial relational database in the new era. If one phrase had to sum it up, "inclusive within, open without" would be the best interpretation.

Which brings us to the final question: whether you’re in the traditional industry or the consumer Internet, Linux or Windows, will your next key application be SQL Server?

About the author

Kaiduo He is Senior Director of Technology at Gridsum (Nasdaq: GSUM). He graduated from Tsinghua University and previously worked in the infrastructure department of Morgan Stanley. Since joining Gridsum in 2011, he has participated in the architecture and design of many big data solutions for digital marketing and social listening in China. His technical interests include the .NET ecosystem, cloud computing, and the big data technology stack, and he has written a companion article titled "Analyzing the Evolution of Microsoft's Technology Ecosystem from Visual Studio 2017".

Thanks to Guo Lei for correcting this article.