The original address www.cnblogs.com/Dodge/artic…

A previous ideal

Since Ken Thompson and Dennis Ritchie developed the prototype of the UNIX system at Bell Telephone Laboratories in the United States in 1969, nearly 30 years have passed. The word "UNIX", which has no dictionary meaning, is actually a pun on the enormous MULTICS (MULTiplexed Information and Computing System) operating system.

In October 1957, the Soviet Union launched Sputnik, prompting President Dwight D. Eisenhower to invest billions of dollars in science. ARPA (the Advanced Research Projects Agency) was established in this climate. It was responsible for promoting systems development and related programs, and became an important driver of computing in the United States at that time.

In the 1960s, as mainframe computers developed, the Massachusetts Institute of Technology was the first to implement CTSS (the Compatible Time-Sharing System) and enjoyed a lofty position in the computing field. In 1963, J. C. R. Licklider (1915-1990) of MIT promoted Project MAC, which was based on IBM mainframe computers and connected nearly 160 terminals scattered around school districts and faculty homes; 30 users could share the computer's resources at the same time. By 1965 the project had become overloaded, so MIT decided to develop a larger time-sharing computer system. The new plan was MULTICS, the largest time-sharing system in the history of computing to that point, intended to connect 1,000 terminals and support 300 users online simultaneously. It faced a double challenge: the time-sharing concept was still being explored and shaped in academic and research institutions, and the computer hardware itself had to be redesigned.

At the time, MIT had approached IBM to cooperate on the MULTICS project, but IBM was too busy dealing with its own problems to participate. Meanwhile, General Electric (GE) was developing its own mainframe and, seeing an opportunity, invited MIT to help develop the specifications for its GE 645 mainframe. With GE eagerly supplying the hardware, MIT turned for the software work to Bell Telephone Laboratories, which could not sell computers but was full of talent. Thus, in 1965, the MULTICS project began with three members: MIT, GE, and Bell Telephone Laboratories.

In 1969, after four years of struggle, the MULTICS project had still not reached its original design ideals, and Bell Telephone Laboratories decided to quit the project. A reduced-function MULTICS was installed on a GE 645 mainframe for use at MIT. Less than a year after the program ended, GE disappeared from the mainframe computer market entirely. Later, the MULTICS plan was derided as "Many Unnecessarily Large Tables In Core Simultaneously."

Farmer: Personally, I think the MULTICS project was born in 1965, when mainframe computers were beginning to take off, and died in 1969, when they were at their peak. Had it succeeded in the late 1960s, it would have encouraged what computer pundits then widely regarded as the ideal of the "computer utility", or at least slowed the rapid decline in mainframe development and resource concentration in the early 1970s. A successful MULTICS would have increased the capability of mainframe computing at least tenfold. But MULTICS failed, and the failure dealt a severe blow to confidence in computer utilities, which relied on mainframes at the time. With no similar plan to follow, centralized mainframe computing saw no significant gains in efficiency, which accelerated the industry's search for a new path. On the other hand, the failure also gave the software engineers involved valuable experience and a positive influence on their later work.

A few years later, at AT&T, the spectacular failure of MULTICS was traded for a spectacular success. A joke on its name was born.... UNIX.

The beginning of a game

When members of the Computing Science Research Center at Bell Labs quit the MULTICS program in 1969, Bell Labs had no fully developed, convenient, conversational computing environment. While many of these engineers worked to improve the programming environment, Ken Thompson, Dennis Ritchie, and their colleagues drafted a new file system architecture, the precursor of the early UNIX file system. At the time, Thompson was busy porting a game called "Space Travel" from MULTICS to GECOS using Fortran. CPU time on the GECOS mainframe was expensive (a single game could cost about $75 of computer time), and controlling the "spaceship" was not very effective, so Thompson had to find an alternative development environment. He set his eye on a little-used Digital Equipment Corporation PDP-7 minicomputer, which was fitted with a Graphic-II display and had decent graphics capabilities. Thompson then worked with Dennis Ritchie to move the game to the PDP-7. To get a better development environment along the way, Thompson and Ritchie jointly designed an operating system that included a file system, a process subsystem, and a small set of utilities. At the time, the system supported only two users. Since Bell Labs was still smarting from the MULTICS failure, Brian Kernighan jokingly called the new operating system the UNiplexed Information and Computing System, abbreviated UNICS, which later became "UNIX" by homophony. No one expected this joking name to survive to the present day.

Initial free development

In fact, the “UNIX” system was only used privately and received little attention until a formal plan was made in 1971.

In 1970, Bell Labs' patent department, lacking a word-processing system, bought a PDP-11 computer for design and development purposes. Delivery of the PDP-11 was not smooth: the processor arrived first, and the hard disk took months more. When the PDP-11 was ready, they ported UNIX to the PDP-11/20, which had a 512-Kbyte hard disk, and developed a word-processing tool on it, the precursor of nroff/troff. UNIX at the time allotted 16 Kbytes to the system and 8 Kbytes to user programs, and the maximum file size was 64 Kbytes. The system, including the word-processing tool, was officially adopted by the patent department of Bell Labs under the name "First Edition". After the port succeeded, Thompson added a Fortran compiler written in B. But since B was an interpreted language, its performance was poor, so Ritchie developed B into a compiled language that generated machine code and allowed data types and structures to be defined. He called it C. In 1973, the original UNIX programs were all rewritten in C, and the first official C-based version of UNIX appeared: V5 (Fifth Edition).

UNIX slowly spread through Bell Labs, with as many as 25 installations. Bell Labs at the time was jointly owned by AT&T and its subsidiary Western Electric, and the lab was responsible for developing and improving the telecommunications equipment that Western Electric made and AT&T used in the Bell System, as well as for defense-related research under military contracts. AT&T itself could not engage in computer sales because of antitrust restrictions, so AT&T's executives gave little support to UNIX development at the time, and Bell Labs had no intention of promoting it. However, to cope with the growing number of UNIX users and the technical-support needs of the laboratory's various departments, the UNIX System Group (USG) was established. The group provided only technical support, not a mandate for further development, so the development of UNIX in this period was due entirely to the efforts of individual AT&T engineers. That development was completely unorganized and unsystematic, and the players were engineers, which set the stage for a UNIX that would later be less approachable for the general public.

Out of Bell Labs

In 1974, Thompson and Ritchie jointly published the paper "The UNIX Time-Sharing System" in Communications of the ACM, and it drew a considerable response. In 1975, the sixth edition of UNIX (V6) was released, providing more powerful functionality than the expensive mainframe operating systems of the day. Its biggest feature was that it was written in a high-level language and could be ported to different computer platforms with only modest modifications. In 1976, UNIX V6, complete with full source code, began to spread from Bell Labs to universities and research institutions, among them UC Berkeley. In 1977, Berkeley released 1BSD (the first Berkeley Software Distribution), which contributed greatly to the subsequent development of UNIX and had a profound impact, as will be explained later. In the same year, UNIX was widely adopted by telephone companies because it provided a good program-development environment, network transmission services, and real-time services. Interactive Systems Corporation became the first Value Added Reseller (VAR) of the UNIX system, using it to enhance office-automation environments. That year UNIX was also modified and installed for the first time on an Interdata 8/32, the first time the UNIX operating system ran on a non-PDP computer. Since then, UNIX systems have been ported to various microprocessors and new computers.

A rock of stability

In 1978 came the most significant release, the UNIX Time-Sharing System, Seventh Edition, also known as V7. This version included a Fortran 77 compiler, a shell (the Bourne shell only), text-processing tools (nroff/troff, roff, the MS macros, etc.), UUCP (UNIX-to-UNIX copy, supporting file transfer between two UNIX machines), data-processing tools (powerful utilities such as awk and sed), a debugger (adb), a program-development tool (make), lexical-analyzer and parser generators (lex, yacc), simple drawing tools, and support for the C language and the lint verifier; it was designed to run on the PDP-11 and the Interdata 8/32. By then the architecture and functionality of the system were quite complete. The original author of the Bourne shell called it "an improvement over all preceding and following Unices", and today the version is known as the "last true UNIX". This shows how important a role V7 played in the development of UNIX.

At the time, Digital introduced a 32-bit superminicomputer called the VAX, paired with an operating system called VMS. The hardware was impeccable (to this day it is praised by older system administrators for its stability), but DEC supported only its VMS operating system, and Bell Labs engineers preferred UNIX. The port was done by John Reiser and Tom London, who moved UNIX to the VAX based on V7; this version is called UNIX/32V. For ease of porting, they treated the 32-bit VAX as a larger PDP-11 (the DIGITAL PDP-11 being 16-bit), and for efficiency 32V discarded the paging facility provided by the VAX hardware. (Digital's VMS supported paging; 32V, having discarded it, did not support virtual memory.) Even so, 32V supported a 32-bit address space of up to 4 GB. Thus 32V, without paging, began to be widely installed on VAX machines.

DEC introduced its own UNIX OS, called ULTRIX, around 1984.

An important continuation and development – BSD UNIX

Back in November 1973, Ken Thompson and Dennis Ritchie presented UNIX at the Symposium on Operating Systems Principles at Purdue University in Indiana. Sitting in the room was a U.C. Berkeley professor named Bob Fabry, and K&R's UNIX announcement that day immediately caught his eye. At the time, Berkeley was still in the era of mainframe computers and batch-executed programs, with nothing like UNIX's conversational environment. After the meeting, Fabry decided to bring UNIX back to Berkeley.

So Berkeley's Computer Science, Mathematics, and Statistics departments jointly bought a PDP-11/45, ready for UNIX. In January 1974, Bell Labs sent a tape of V4, and a student, Keith Standiford, started installing it. Standiford ran into problems and turned to Bell Labs for help; Thompson, in New Jersey, debugged the installation remotely over a 300-baud modem.

This was Bell Labs’ first contact with Berkeley in the history of UNIX.

After the debugging, V4 ran smoothly on Berkeley's new PDP-11/45. Since the machine was a joint purchase by three departments, and Computer Science had managed to get UNIX while Math and Statistics had to use DEC's RSTS system, after some coordination each day was split, eight hours for UNIX and sixteen for RSTS, with the slots rotated among the three departments. Before long, the conversational convenience of UNIX proved so popular that most students shifted their work into the UNIX hours, while the batch processing that could occupy up to sixteen hours a day was ignored.

At that time, Professors Eugene Wong and Michael Stonebraker, attracted by the convenience UNIX offered, decided to move their INGRES database project from its batch-processing environment to a UNIX system. In 1974, they acquired a new PDP-11/40 running V5; INGRES was the first project at Berkeley to move its operating environment to UNIX. Demand for UNIX environments at Berkeley grew rapidly, and to cope with it, Michael Stonebraker and Bob Fabry decided to apply for two more PDP-11/45s. In early 1975, DEC introduced the PDP-11/70, which cost about as much as two PDP-11/45s but was more powerful, so they decided to buy one of those instead.

That machine brought together Ken Thompson, Bill Joy, and 1BSD. It stands like a landmark in UNIX history: received at Berkeley from Bell Labs, building on the past and creating something new. Farmer: I personally think it belongs in a museum.

The machine arrived in Berkeley in late 1975, just as Thompson was invited back to his alma mater as a visiting professor, teaching UNIX. While there, Thompson worked with Jeff Schriebman and Bob Kridle to install the new V6 on the PDP-11/70.

In 1975, a graduate of the University of Michigan arrived at Berkeley: Bill Joy. Joy and fellow student Chuck Haley (who wrote tar) liked to hang out in the computer room together, and Thompson often joined in. They succeeded in improving the Pascal system's interpretation and error detection, as well as its speed of interpretation and execution. In addition, after an ADM-3 screen was installed, they found the ed text editor unsatisfying, so, starting from a similar editor called em, they developed a text-editing tool to their own satisfaction: the ex editor.

In the summer of 1976, Thompson returned to Bell Labs from his sabbatical. Joy and Haley were by then exploring the UNIX kernel, and even making changes to it. In early 1977, Joy produced a tape titled "Berkeley Software Distribution". This was 1BSD, and it included the new Pascal compiler and the ex editor.

The following year, new terminals arrived: the ADM-3A, which supported cursor addressing. On this screen Joy produced something some people love, and some hate: the text editor vi. Before long, Joy realized that older terminal hardware would still be in use on other computers, so for ease of support he designed an interface for managing and driving different screens. That interface is now known as termcap. In 1978, the "Second Berkeley Software Distribution", or 2BSD, which included the enhanced Pascal, vi, and termcap, quickly replaced the original version. By 1979, at least 75 PDP-11 machines were running 2BSD. Since then, BSD versions running on the DEC PDP-11 family have been identified as 2.xBSD. Thanks to the longevity of the PDP-11, I can still find PDP computer sites on the Internet today; they seem to be working away silently in some corners even now. The last revision of 2.xBSD was 2.10BSD in 1987, based on the main 4.3BSD architecture.
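The idea behind termcap was to describe each terminal once, in a database, as a colon-separated list of named capabilities, so that programs like vi could drive any screen. The sketch below is illustrative only, not copied from a real database: `co#` and `li#` give the screen size, `cl=` the clear-screen string, `ho=` the home-cursor string, and `cm=` the cursor-motion sequence (here an ADM-3A-style `ESC =` addressing).

```text
# illustrative termcap entry for an ADM-3A-like 24x80 terminal
a3|adm3a|lsi adm3a:\
        :co#80:li#24:\
        :cl=^Z:ho=^^:\
        :cm=\E=%+ %+ :
```

A program looks the entry up by terminal name (the TERM variable) and emits the capability strings instead of hard-coding escape sequences for one screen.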

One of the most important features introduced in BSD UNIX was the command vi, still loved and hated to this day. I have met many people learning the UNIX OS; most of them did not find vi easy to use and master, and plenty of them hate the command. A few days ago I even saw a website debating whether vi has hindered the development of UNIX. That is a bit of an exaggeration!

Bill Joy has said publicly many times that he would rather not have written vi had he known how "popular" it would become. Joy has said that he wanted to add multiple windows to vi, but while he was writing that part of the program the tape drive crashed, so he had to work without backup. Halfway through, the hard drive he was using died as well; with nothing saved and no backup tapes, Joy announced that he would not be adding multiple windows to vi. He later wrote the documentation for the existing version of vi and moved on to other things. So vi grew up to be what it is today. Farmer: I think that may have been a blessing rather than a curse! Had the multiple-windows feature been released back then, the vi we picture today might have been a dead end.

There was a professor, Richard Fateman, running his Macsyma project on a PDP-10. He needed a larger address space to run the program, so in early 1978 he set his sights on DEC's then newly released VAX-11/780. With the help of other departments he managed to scrape together enough money to buy the VAX. Initially the machine ran the VMS operating system, but the other users wanted UNIX, so Fateman installed 32V. The problem was that 32V did not support virtual memory, so Fateman approached Professor Domenico Ferrari to ask him and his team to add it to UNIX. One of Ferrari's students, Ozalp Babaoglu, came up with a seemingly feasible solution, but because it involved both the VAX hardware and the UNIX kernel, he turned to Joy for help. They fought their way through with only the single VAX. In January 1979, when a version of UNIX supporting virtual memory on the VAX finally came out, 32V passed into history. Peter Kessler and Marshall Kirk McKusick followed with Pascal, and Joy ported ex, vi, the C shell, and other tools from 2BSD. This version was 3BSD, the first UNIX OS to support virtual memory, demand paging, and page replacement.


UNIX meets DARPA

In the late 1970s, the Defense Advanced Research Projects Agency, or DARPA, was looking for a common computing environment for its artificial intelligence, VLSI, and computer-vision research programs. The first hardware choice was DEC's VAX, running the VMS operating system. That combination was given priority because its functionality came fairly close to DARPA's requirements, but after DARPA talked to DEC about supporting VMS, it did not get a satisfactory answer. This forced DARPA to consider moving toward UNIX. The biggest drawback of the UNIX OS for the VAX (32V) was its lack of virtual-memory support; but by this time, someone had overcome that.

At the time, Professor Bob Fabry wrote a proposal to DARPA suggesting that the program's requirements be built on Berkeley's 3BSD, which supported virtual memory. The proposal greatly interested DARPA, and 3BSD received good reviews from DARPA members in the relevant programs; Berkeley beat Carnegie Mellon University and BBN (Bolt Beranek & Newman, Inc.), and Bob Fabry won the DARPA contract. The contract began in April 1980 and ran for 18 months, and from then on DARPA used the UNIX OS as its standard operating system. After winning the contract, Fabry established a supporting organization, the Computer Systems Research Group, or CSRG, and hired Bill Joy to do the software development. Joy quickly built on the previous 3BSD and integrated new functionality, for example job control (by Jim Kulp), auto-reboot, and a 1-Kbyte-block file system. He also integrated the Pascal compiler, the Franz Lisp system, and an enhanced mail-handling system. This was 4BSD, published in 1980. Soon it was installed on nearly 500 VAXes.

DARPA adopted this version as DARPA’s standard UNIX operating system at the time.

Around that time, a fellow at Stanford Research Institute named David Kashtan wrote an evaluation of the performance of VMS versus BSD UNIX on the VAX, noting that BSD UNIX was not as efficient as VMS. When Joy learned of this, he spent less than a week retuning the UNIX kernel, then wrote a report showing that their BSD significantly outperformed VMS on the VAX. In June 1981, Joy's tuned system, together with auto-configuration code by Robert Elz, was released as 4.1BSD.

DARPA was so pleased with Berkeley's 4.1BSD that it renewed the contract for two more years at five times the previous funding, half of which went to Berkeley to continue developing BSD UNIX. The price of having more money was facing more demands. DARPA now had clear goals for what UNIX should provide: a faster and more efficient file system, support for multi-gigabyte address spaces, flexible interprocess communication, and integrated networking support. At the same time, DARPA established a steering committee to keep the program on target; its main members were Bob Fabry, Bill Joy, and Sam Leffler of Berkeley, Alan Nemeth and Rob Gurwitz of BBN, Dennis Ritchie of Bell Labs, Keith Lantz of Stanford University, Rick Rashid of Carnegie Mellon University, Bert Halstead of MIT, Dan Lynch of the Information Sciences Institute, Duane Adams and Bob Baker of DARPA, and Jerry Popek of the University of California, Los Angeles.

Joy soon began integrating the TCP/IP protocol implementation previously published by BBN's Rob Gurwitz, but he was not satisfied with the efficiency of BBN's code, so Joy and Sam Leffler wrote a new version of their own, and added supporting network tools: rcp, rsh, rlogin, and rwho. They called it 4.1aBSD; this version was never officially published, entering internal use in April 1982. Even so, copies of it spread everywhere before 4.2BSD was officially released. In June, the newly completed file system was added to the 4.1aBSD kernel, and the version was updated to 4.1bBSD.

As for rcp, rsh, rlogin, and rwho: for security reasons, this command group has gradually been replaced by a new one, the Secure Shell (SSH); see the SSH website (http://www.ssh.org).

In the late spring of 1982, Bill Joy, tired of the Berkeley environment, agreed to join the newly founded Sun Microsystems, Inc. as its fourth founder, and spent that summer on the road. After Joy revised the interprocess-communication mechanism and reorganized the UNIX kernel, Leffler took over his work. Because of the contract timetable, Leffler published 4.1cBSD in April 1983 for trial use by members of DARPA's various programs. In June, DARPA's steering committee met for the second time to review the latest version of BSD. Leffler continued to integrate the system and, in August 1983, released 4.2BSD. It met DARPA's stated needs: a high-speed file system and enhanced virtual-memory capability for CAD/CAM image processing and AI research; a distributed interprocess-communication mechanism; support for 56-kbit/s ARPA Internet connections and 10-Mbit/s Ethernet LANs; and a restructured, modular kernel code base offering more efficient porting to new computer platforms.
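The interprocess-communication mechanism that 4.2BSD introduced is the Berkeley sockets interface, which survives nearly unchanged in modern systems. As a minimal sketch (in Python rather than the original C, with an illustrative loopback echo exchange rather than anything from the historical sources):

```python
import socket
import threading

def echo_server(server_sock):
    # Accept one connection and echo back whatever it sends.
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# socket()/bind()/listen()/accept() and connect() are the same primitives
# 4.2BSD introduced, here via Python's wrapper over the C API.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, 4.2BSD")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # -> hello, 4.2BSD
```

The same calls, compiled as C against the 4.2BSD libraries, are what gave UNIX its built-in networking.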

Sun produced workstation computers (later RISC-based) running a BSD-based UNIX OS. A UNIX that offered mainframe-class multitasking and networking, on hardware that was cheap compared with a minicomputer, was widely favored by the engineering community, and the fate of the minicomputer was sealed. And because of the network, computer software also began to develop toward the client-server architecture.

In 1982, Sun had its own operating system, SunOS 1.0, derived from 4.1BSD. It wasn't until November 1990, when SunOS 4.1.1 was released under the Solaris 1.0 name, that Sun began to move toward System V. SunOS 4.1.1 was a BSD-centered UNIX hybrid with System V tools, a transitional approach taken for business reasons (explained later). The SunOS 4.1.x line lasted only until SunOS 4.1.4 in 1994, marketed in the Solaris 1.x line. The Solaris line that really continues to this day began with Solaris 2.0 (SunOS 5.0) in July 1992.

The commercially successful Sun Microsystems did make some significant contributions to the development of the UNIX OS; for example, NFS (Network File System), published in 1984, and PC-NFS, which followed in 1986.

A bumpy road to Commercialization – the UNIX version wars

Perhaps the most obvious fact is that commercialization essentially meant the creation of separate versions of UNIX. This is not surprising once the interests of uniqueness and exclusivity are considered. So UNIX began to spawn quite a few versions, a phenomenon that created a degree of confusion for users and for vendors developing applications alike. And the helplessness had only just begun.

On January 1, 1984, AT&T, a $149.5 billion behemoth with 1,009,000 employees, was finally broken up by Judge Harold H. Greene into seven Regional Bell Operating Companies (RBOCs). AT&T's regional networks were split off overnight, and it lost its monopoly on long-distance telephony. This shift in circumstances led to a 180-degree change in AT&T's attitude toward UNIX (actually, Farmer: I mean its attitude toward charging for it).

As mentioned earlier, in the early 1970s AT&T held an absolute monopoly in the long-distance telephone market and was therefore barred by the American government from entering the computer and other industries, which is precisely what made the free and open early development of UNIX possible. It was not until 1979 that AT&T announced plans to commercialize UNIX. In November 1981, AT&T's USG released System III, updated the following year to System IV. In 1983, AT&T merged CRG and USG to form the UNIX System Development Lab, usually known simply as USL; it is not hard to tell from the name what role it would play. That was the year System V came out. At this point AT&T realized it was uneconomical to spend heavily on advertising for each version update, so it decided to keep the System V name from then on. In 1984, System V Release 2, SVR2 for short, was released. Virtual-memory functionality from the BSD line finally appeared in this release; one has to marvel at AT&T's plodding pace. SVR3 was released in 1986, followed by SVR3.2 in 1987.

In 1987, Sun, which already had a firm foothold in the workstation market, approached AT&T about merging the two major versions, System V and BSD. In early 1988, the two companies signed a partnership that gave AT&T a seat on Sun's board of directors and the right to buy up to 20 percent of Sun. The project was an opportunity to integrate the then-fragmented UNIX OS, but that was the ideal. In reality, the plan alarmed other members of the UNIX community, especially industry leaders such as IBM, DEC, and HP. To resist the move, they formed an opposing coalition: the Open Software Foundation, or OSF, born in 1988. Besides the three giants, as many as 30 computer hardware manufacturers and systems consultancies joined the opposition's ranks. AT&T and Sun responded by organizing UNIX International (UI); it had fewer members than OSF, but with names like Intel, Toshiba, Unisys, Motorola, and Fujitsu among them, it too was a force to watch.

In the real world, corporate interests always take precedence over individual ones, so the two camps never agreed on anything again, and even the unified UNIX specifications drafted at the time were, strictly speaking, never implemented. In fact, such conflicts of corporate interest also existed among members of the same camp. The confrontation between the two camps was arguably the most significant industrial conflict in UNIX history: political considerations of commercial interest trumped technical ones, and thus sealed the fate of UNIX's continued fragmentation. AT&T released SVR4 in 1989, and Sun later moved toward SVR4, rebranding its SunOS under the Solaris name. OSF released OSF/1 in 1990. The UNIX version problem thus became even more confusing. Interestingly and ridiculously, though, the concept of the "Open System", touted by both sides, set off a reaction in the computer industry that neither side expected.

Soon AT&T pulled its investment in Sun, and the members of that camp split up as well. USL officially became an independent commercial company in 1991. But the value of UNIX in the commercial market had already changed...

Networking Release 2 sets UNIX free

Since UNIX left Bell Labs, research institutions and academia have played the dual role of inheriting and developing it. From 1979 to 1984, AT&T, the owner of UNIX, was generous in its licensing policies toward academia, and also provided a degree of funding and cooperation. Thanks to AT&T's generous licensing and sharing of source code, the study of the UNIX time-sharing operating system became fashionable in academia. Berkeley BSD's contribution to UNIX, for example, is an acknowledged fact. Early BSD users, however, still had to pay AT&T a license fee. This is not surprising from the point of view of industry funding academia: the purpose of funding is to obtain results, so everything built on AT&T's original code belonged to AT&T, and AT&T thus held ownership of UNIX. After 1984, AT&T became more aggressive in protecting the UNIX source code, even requiring university users to sign confidentiality agreements in order to prevent the source from flowing out of academic institutions into commercial hands.

In the process of DARPA's funding of Berkeley's BSD development, TCP/IP was born, the communication protocol suite that so profoundly shapes today's computers and the Internet. AT&T owned neither the TCP/IP source code nor its copyright, because DARPA had an explicit rule that recipients of software-project funding must release the resulting source code unconditionally. That rule means a great deal today. Under these conditions, Berkeley's CSRG (Computer Systems Research Group) released Networking Release 1 in June 1989, at the request of BSD vendors. It included the TCP/IP source code and tools, aimed at the PC manufacturers who were then just getting started. A Networking Release 1 license cost just $1,000 and required no commercial license from AT&T; instead, it carried Berkeley's open license.

Farmer: I think the Berkeley license is almost a license of conscience; it imposes no substantive restrictions on use. It allows the source code or executables to be modified under any circumstances and allows the modified programs to be commercialized without anything owed in return, and it does not require developers to release their modified source code. If you simply sell it as-is, Berkeley does not mind either. The one inviolable restriction is that Berkeley's contribution must be acknowledged in the derivative's copyright notices. This practice changed little in the years that followed, and this style of licensing became the spirit of the Berkeley license.

The response Keith Bostic got with Networking Release 1 was much better than the CSRG staff had expected, so good that the CSRG at Berkeley felt the need to release more BSD source code. That inspired Keith Bostic, a member of CSRG, to start organizing volunteers for an earth-moving, if not breathtaking, programming project. The project's goal was something of a taboo at the time. Farmer I personally like to call it the Free UNIX Project.

By Marshall Kirk McKusick's account, the project was roughly split into two parts: the operating system tools (utilities) and the kernel. Participants had to write the programs without reference to the AT&T UNIX source code, because only under that condition could they produce code free of AT&T's copyright. It was certainly not an easy task. Keith Bostic ran around and organized more than 400 dedicated software engineers, and it took 18 months to rewrite the operating system's main tools and libraries. Marshall Kirk McKusick was responsible for rewriting the kernel. In the kernel, however, Berkeley and AT&T had long shared UNIX source code with each other, so the code each side had added was thoroughly mixed together. To get to the bottom of who had written what, they decided to go line by line. They first spent several months building a conversion and comparison database covering every line of every file in the kernel, then proceeded to remove the code derived from AT&T's 32V and rewrite it. Even so, there remained six files they could not manage to rewrite completely. In the end, they decided to publish everything they had done anyway. The licensing followed that of Networking Release 1, and the tape still cost $1,000. This version was Networking Release 2, also known as 4.3BSD Net/2, published in June 1991. Although it was an incomplete operating system, it stands today as an epochal milestone: the UNIX OS was free.

Who is “Big Brother”? — The infringement lawsuit

AT&T's USL officially became a company in 1991, which of course meant it would place greater emphasis on UNIX's commercial interests. At the time, the UNIX OS dominated the high-end computer market; from Cray supercomputers to IBM mainframes to minicomputers to workstations, UNIX ruled the roost (something that has not changed much in the 21st century). Even on personal computers, which took off after the mid-1980s but were derided as toy computers, there were several commercial versions, such as XENIX and Interactive UNIX, that paid royalties to AT&T. UNIX was a cash cow for AT&T.

But with the advent of Networking Release 2 (hereafter Net/2), all this changed!

First, an i386 enthusiast named Bill Jolitz got his hands on Net/2 and quickly filled in what was missing from the Net/2 kernel; the BSD kernel was now complete. When Bill Jolitz put it on the Internet to share his source code with others, the response was enthusiastic. Since the release targeted personal computers with i386 microprocessors, it was named 386BSD and was officially released in February 1992. This was the first fully featured version of BSD unencumbered by AT&T's copyright. Bill Jolitz was the sole kernel maintainer at the time. After he left the project, other BSD enthusiasts carried the version on, and it later branched into NetBSD and FreeBSD.

Another outfit that integrated Net/2 was Berkeley Software Design, Incorporated, BSDI for short. BSDI named its modified system BSD/386, since Net/2's copyright statement declared its source files legitimate and allowed users to commercialize derivatives. They packaged the results, advertised BSD/386 for $995 including the source code, and offered a toll-free inquiry number, "1-800-ITS-UNIX". The date was circa January 1992. At the time, USL's System V with source code cost about a hundred times more than BSD/386. This alarmed Big Brother AT&T, which formally warned BSDI in writing about its trademark violation (a phone number containing the word UNIX) and publicly declared that AT&T owned the UNIX trademark. BSDI fired back publicly with an ad stating that its business practices were completely legal. As expected, this provocation led the two sides to court hand in hand.

AT&T's USL accused BSDI of stealing its UNIX source code and asked the judge for justice. At the hearing, BSDI pulled out the trump cards it had prepared: files it had written itself without any AT&T source code, and the Net/2 source code obtained under the BSD license. The former was enough to make BSDI unassailable, and the latter kept it out of the storm. BSDI's defense was accepted by the judge. But AT&T did not stop there. It shifted its focus to the BSD licensing of Net/2 and refiled its complaint against both BSDI and the University of California, Berkeley, seeking a court injunction against all BSDI sales of BSD/386, and against the university as well.

Farmer I think that AT&T is, after all, a for-profit enterprise, and it is only natural that it should protect its business interests. Although Berkeley and AT&T had an unusual relationship in UNIX development, the commercial interests were real. Corporate funding for academic research is mostly based on commercial considerations; I am sure the senior members of the academic community understood this when seeking assistance, even if the majority of academia might find it unacceptable or unwelcome. At any rate, the lawsuit served as a wake-up call on this point.

As the defendant, the University of California, Berkeley had no choice but to face the relentless commercial litigation. But it also took issue with AT&T's System V copyright, because AT&T's UNIX licensing statements made no mention of Berkeley's contributions. So Berkeley countersued AT&T for violating the terms of the BSD license. Berkeley's counterattack intensified the battle, and the lawsuit moved, still inconclusive, from federal court in AT&T's home state of New Jersey to a court in California, where the university is based.

By 1993, with the litigation still under way, AT&T had packaged up USL and was looking for a buyer at $100 million. AT&T eventually sold USL to Novell for $80 million, and the new owner duly entered the fray. But there was a glimmer of calm: the case ended in an out-of-court settlement in January 1994. The actual contents of the agreement are known only to the parties.

In terms of the outcome of the lawsuit, Berkeley and BSDI were perhaps on the winning side. But from UNIX's point of view, there may have been no winner at all.

In June 1994, after the dust had settled, the CSRG at Berkeley released 4.4BSD-Lite. In this edition, 70 files carry a newly revised copyright notice that acknowledges the contributions of both AT&T and BSD and explicitly grants the right to distribute the files freely. Yet somehow 4.4BSD-Lite, which should have been publishable in its entirety, still lacked three files. At the time, Farmer I happily bought a copy of the 4.4BSD-Lite CD-ROM Companion; holding the disc in my hand, I still feel a bit giddy.

Novell, which now held the UNIX source code and the UNIX trademark, handed the UNIX trademark over to X/Open and developed an operating system called UnixWare. The market reaction after launch was not enthusiastic. Soon afterwards, Novell sold UNIX to SCO; UNIX changed hands for the second time in 1995, with SCO pledging continued support for UnixWare.

Note: *1 Intel released the 4.77 MHz 8086 microprocessor in 1978. In 1980, Microsoft released XENIX, a V7-based version of UNIX for microprocessors. In 1982, Santa Cruz Operation, a software company founded in 1979, became Microsoft's co-developer. The company has continued to work in this field to this day, known by its acronym SCO.

*2 Interactive IS/1 (based on V6). This version later evolved into the more familiar name Interactive UNIX. It was later absorbed by the deep-pocketed Sun Microsystems for its Solaris for x86, and is now gone.

*3 While I was revising this, the company BSDI was merged into Wind River and renamed iXsystems. 2001/05/03

*4 On May 4, 2001, Caldera International, Inc., a new holding company formed by Caldera, Inc., formally acquired SCO's Server Software Division and SCO Professional Services Division.

 

The GNU Project — opens a new avenue

On September 27, 1983, Richard M. Stallman (RMS) of the MIT Artificial Intelligence Lab posted a message titled "new Unix implementation" to the net.unix-wizards and net.usoft newsgroups. This was the beginning of what is now known as the GNU Project. In the message, seen as a draft of the "GNU Manifesto", RMS set out his own ideas and aims for a "Free UNIX" operating system under the name GNU.

"If I like a program, I should share it with others who like it" was RMS's motto, and it seems to have been the driving force behind his determination to run the GNU Project. RMS wanted to write a free operating system, one that everyone could access and use as freely as air. His main reasons for choosing a "UNIX-compatible" design were these: RMS made it clear that UNIX was not his personal ideal of an operating system; he had only read some material about it and had never used it (MIT used ITS, the Incompatible Timesharing System); but he believed the UNIX operating system had good essential qualities, and that GNU would be more readily accepted if it were UNIX-compatible. So, following MIT's tradition of recursive acronyms, RMS defined GNU as "GNU's Not Unix".

In January 1984, RMS decided to leave the MIT AI Lab, where he had been for more than ten years, to pursue his dream. When he offered his resignation to his boss, Patrick Winston, Winston tried to dissuade him: "Are you really going to quit?" RMS, unmoved: "Yes." Winston, who had clearly expected that answer, added: "Do you want to keep your keys?" So RMS began his "unemployment" at his old employer, holed up in his old office, planning how to start his GNU Project. But developing a new UNIX-compatible operating system is by no means an easy task, even for top computer companies with deep pockets and ample manpower. After drawing up his "GNU Manifesto", he formally announced to the world what he was about to do. The seeds fell to the ground.

The first program in the GNU Project was the Emacs editor, written by the lone RMS starting in September 1984. By early 1985, Emacs was usable, so RMS put it on an FTP server on a machine called prep.ai.mit.edu, where it was freely available to anonymous visitors. Before long, Emacs's powerful features caught the attention of some enthusiasts; since the source code was included, they could add new features or fix bugs themselves, and it quickly drew a strong response. As its fame spread, people began to join the GNU Project's programming ranks. RMS was excited and delighted at being "not alone".

The Internet was not yet widespread, so many people interested in the Emacs program could not get it through FTP. When someone asked RMS whether there was another channel, the unemployed RMS saw a source of funds to support his continued struggle: selling "free software".

A person, an independent individual, who wants to carry out his ideals in reality must first accept reality. Only by accepting it as fact, and walking the path of his ideals upon it, can he gain a relatively stable starting point. – Network Farmer

While thinking and writing, a feeling suddenly crossed my mind (so I record it here in passing). In any case, RMS really did start serving people in need at $150 a tape. On this foundation, RMS founded the Free Software Foundation (hereafter FSF) that same year. For the GNU Project, this meant moving beyond a personal ideal to the stage of group organization. RMS also copyrighted the GNU Project's software, using the word "copyleft" to describe it, the opposite of copyright. This is embodied in the GPL, the General Public License. The seeds of the GNU Project took root.

Revenue expanded from selling GNU free software to other related software and reference manuals, providing technical support, accepting donations of computer equipment and funds (donors enjoy a certain tax deduction by law), and training software talent for enterprises. The FSF has struggled to generate revenue and still runs short of cash. RMS himself draws no salary, and software engineers hired by the FSF are paid half of what the industry pays. But that by no means means the GNU Project's software is watered down by half. The GCC compiler, the GNU Project's free compiler, was released as beta 0.9 in March 1987; the latest version is 3.0. This compiler is arguably the cornerstone of free software development today. GCC generates machine code as reliably as, or even better than, commercial compiler products.

By the early 1990s, the GNU Project had completed a substantial quantity and quality of system tools, widely used on UNIX systems on all kinds of workstations at the time. Despite this achievement, it was still not a complete operating system: it lacked a kernel of its own.

After 4.2BSD, UNIX grew larger and larger, and the kernel began to cause inconvenience and problems. Thus another design concept developed at the time: the microkernel.

In 1985, Carnegie Mellon University (CMU) took 4.3BSD as its basis and split it in two, a microkernel and a single server. The project was named "Mach", and it was the technological forerunner of microkernel development. GNU had intended to adopt the Mach project's work directly, but the wait stretched from the 1980s into the early 1990s; after several discussions, they decided to adopt the microkernel approach in a project of their own, called "Hurd". The project is still being fought for, though the microkernel approach has made them suffer; fortunately, beta versions 0.2 and 0.3 have been released.

Into the 21st century, RMS is still working hard to cultivate his dream land. He does not think he has fully realized his "GNU Manifesto" yet, but his dedication to the idea has brought together a significant number of free software writers, and through the efforts of these people and communities a new path has in fact been opened, leading to a new world. Along the boulevard, under trees already in full bloom, delicious fruit has ripened like a gift for all. People call it Linux.

The focus of the new generation – Linux

In the mid-1990s, the Internet began to spread rapidly around the world thanks to a new kind of application: the World Wide Web and HTML. Demand for Internet hosts exploded overnight. That is when a free operating system, available at no cost and able to upgrade x86 computers to UNIX-class hosts, began to catch the world's attention. The media and computer engineers raced to report on the focus of this new generation, which is called Linux.

This media-touted sensation was, of course, not the overnight work of one person, Linus Benedict Torvalds. Linux is a UNIX-like OS whose copyright is completely unrelated to AT&T. The creator of the original kernel was the Finn Linus Benedict Torvalds (who remains the kernel's maintainer today). Most of the operating system's tools come from the GNU Project built up over the years under RMS, as well as from other free software projects, such as X Windows and the KDE and GNOME desktop interfaces. Since the main parts of the operating system are under the GPL, a variety of distributions exist on the market; the best known are Red Hat, Slackware, SuSE, Debian GNU/Linux... This operating system, then, is the collective effort of countless free software writers. Such a system is what RMS had been striving for all those years: a "Free UNIX". So RMS himself has always thought it would be better to call it "GNU/Linux", and for this reason some people do call the operating system GNU/Linux.

Torvalds had been a computer nerd since his early teens, when he served as "keyboardist" for his grandfather. In 1990, as a sophomore in the computer science department at the University of Helsinki, he took a course on "C and the UNIX operating system" and became obsessed with UNIX. That year, the University of Helsinki bought a VAX running the Ultrix operating system, with sixteen terminals connected for teachers and students to use. Limited computer resources can be hard for a computer nerd to bear, and Torvalds began dreaming of "building" a UNIX system that would run on his own computer.

In January 1991, Torvalds bought a 386 DX33 PC (his third computer) on installments, using a "student loan" plus the previous year's Christmas bonus. The operating system he chose to install was Minix, well known in academia. After several struggles, the Minix OS he had set up failed in many ways to meet his needs, kindling his desire to start over. So Torvalds gradually explored and wrote his own kernel on his 386 DX33. His first release on the net was version 0.01, on September 17, 1991. It was a humble start, but thanks to Torvalds's continuous maintenance and the contributions of his online friends, the kernel he had written alone gradually became the work of a "virtual team".

The average computer user, however, needs an operating system that can be installed (Farmer I like to call it an "installation kit"), not a bare kernel. At that time, the Manchester Computing Centre (MCC) in the United Kingdom made an installation kit named MCC Interim based on version 0.12 of the kernel. Then installation kits mushroomed everywhere: for example, the TAMU (Texas A&M University) version by Dave Safford in Texas, the MJ version by Martin Junius, and the SLS (Softlanding Linux System) version by Peter MacDonald, all non-commercial. As demand for installation grew, Linux installation kits created a new market, and non-commercial installation kits began to appear on the commercial market as well. Slackware was probably the first commercial installation kit to appear. By now there are countless commercial and non-commercial installation kits.

As the number of users soared, the kernel's versions and features advanced rapidly while remaining robust. On March 13, 1994, kernel 1.0 was officially released, and the features integrated into its installation kits had raced to match those of the commercial UNIX OSes of the day. The Linux OS by then had hundreds of thousands of users. The University of Helsinki held a ceremony billed as the first official Linux release; Torvalds became the pride of the Finnish people, and under the solemn coverage of Finnish television and the media, the Linux OS shone at its birth like a "supernova".

Andrew Tanenbaum pointed out early on that the Linux kernel was too tightly tied to the x86 processor, and he therefore believed it could not be ported to other processors, in sharp contrast to the portability of the UNIX OS. That was certainly true at the time, and it had much to do with the hardware resources Torvalds himself had. But as the Linux user base expanded, volunteers ported it to different platforms. Dave Miller, for example, with the same enthusiasm and spirit of learning as Torvalds, successfully ported Linux to Sun's SPARC workstation. Linux also appeared on the Amiga, Atari, PowerPC, and MIPS R4000. Technically speaking, these ports were strictly special cases, but they piqued Torvalds's interest. The port that truly shook the Linux kernel was the Alpha processor.

In May 1994, Digital engineer Jon Hall (aka Maddog) met Torvalds at a Digital users' association meeting, and they hit it off. Maddog urged Torvalds to port Linux to the Alpha chip and offered an Alpha computer for Torvalds to use in his research. The Alpha, then the world's fastest 64-bit chip, was a proud achievement for DEC, its architecture and capabilities superior to Intel's 32-bit processors of the same era. The technical challenge drew Torvalds in. The port, however, was no small task for a Linux kernel built around x86 microprocessors. After nearly a year of hard work by Torvalds and DEC, the Linux kernel was successfully ported to the Alpha processor, sharing a single code base with the x86 version. In March 1995, kernel version 1.2, dubbed Linux'95, was released, supporting Intel x86, DEC Alpha, Sun SPARC, MIPS, and more.

In June 1996, the kernel version jumped directly from 1.3 to 2.0, and Torvalds officially adopted a "penguin" as the logo for Linux. Computers with the Symmetric Multi-Processing (SMP) architecture were now supported, and the supported processors grew to include the Motorola 68K and PowerPC. Thanks to the efforts of the free software community and the support of the computer industry, Linux had come close to the capabilities of the commercial UNIX OSes. Of course, Linux still had a long way to go before it was "mature" and "stable".

Today, the virtual Linux development community, spread around the world, continues to grow. How long will it last? History will tell. But for now, at least, there is an operating system whose program code is freely shared, just as RMS intended.

Note: *5 Minix is an operating system written by Professor Andrew Tanenbaum for teaching purposes, a good model in the educational world for learning the fundamentals of UNIX.

The new civilization century is free and shared

At this point, this account of UNIX's development has walked from past history back to today, the 21st century, and the article is nearing its end. I beg your pardon as Farmer concludes with some personal reflections on history.

Reading and exploring history has been a personal hobby of Farmer's since his youth. I usually cannot stand not knowing the why of the things I like, so I try to find out who created them, why they were born, and how they developed. That is why I wrote this article about UNIX, a strange word I could not find in an English dictionary.

In tracing the course of UNIX, however, I was surprised to discover something different from my readings of 20th-century history. As you surely know, the 20th century was one of the bloodiest and most brutal periods in the history of civilization, during which the previous generations of most peoples suffered on an unprecedented scale. The philosopher Isaiah Berlin, looking back on the 20th century, said something like this:

“My life — I must say this — has gone through the twentieth century without any personal suffering. Yet I remember it as the most terrible century in western history.”

Indeed, every time I read about the 20th century, I feel doubly fortunate. I grew up in Taiwan, an island whose history can only be called that of a "sad land", and whose suffering is not completely over even today, though most of the younger generation have forgotten where they came from and where they are going. As a Chinese, standing on a lonely island that still seems to face armed confrontation between compatriots, I am not sure whether the wounds of history will be healed by the love of my fellow men, or reopened by man's violent and predatory nature... Sorry to digress.

What I want to say is that in the Internet age of the late 20th century, I experienced a delightful civilization of free sharing, one rooted in the heart and crossing existing boundaries. Compared with the early 20th century, when war was regarded as a symbol of civilization, this is priceless progress, even if this civilization is still only a seed just sown. But I believe it will grow, as I. M. Pei said:

"You can never know when what you have sown will be harvested; perhaps only one harvest, perhaps harvest after harvest. You may forget what you planted: an experience, a feeling, a relationship with someone, a tradition or a philosophy. Then suddenly it blossoms, awakened by a different environment. Such blossoming can break through fences and whole ages."

How I wish I could see with my own eyes, some day a few generations hence, that the predation of man upon man has, like smallpox, disappeared from human society, and that sharing has become a moral axiom pursued by mankind as a whole. If such a society is what we aspire to today, then this direction and this hope are worth a lifetime of effort from you and me. Of course, this is only a personal hope, and I know the world is not so beautiful. But if you decide not to do a thing because you assume it is impossible, then the assumption has won, not the actual facts. Perhaps, in past history, the triumphs of justice, fairness, equality, and ideals were ephemeral; so what. As long as we do not give up hope, hope has a chance to become real. All the good of today comes from this, and so will tomorrow's.

Over the years, I have seen many such efforts on the Internet, and I believe the seeds of a new civilization will one day reveal stunning and delightful landscapes. There are lands in the future that we have not yet discovered. I believe we can find the untrodden passage, open the unopened door, and enter the rose garden... It will be a new civilization.