By definition, every TCP/IP application is a client/server application. In this scenario the client makes requests of a server. The request flows down the TCP/IP protocol stack, across the network, and up the stack on the destination host. Whether the server exists on the same host, on another host on the same LAN, or on a host located on another network, the information always flows through the protocol stack.
From the information presented to this point, the client/server model has some general characteristics:
The server provides services and the client consumes services.
The relationship between the client and the server is machine-independent.
A server services many clients and regulates their access to resources.
The client and server can exist on different hardware platforms.
The exchange between client and server is a message-based interaction.
The server’s methodology is not important to the client.
The client carries the bulk of the processing workload so that the server is free to serve a large number of clients.
The server becomes a client to another server when it needs information beyond that which it manages.
By specifying only the interface between the Application layer and the Transport layer, the TCP/IP Application layer permits various Application layer models. This open-ended approach to the Application layer makes it difficult to draw a single model that illustrates all TCP/IP applications. On one end of the scale, applications run as shell-level commands; on the other, applications run in various window environments. For example, the traditional telnet is run from the shell. Yet, some implementations of the telnet client take advantage of windows technology. To make life more complicated, telnet implementations are also available for the distributed computing environment (DCE). C++ client/server applications use the Object Management Group’s (OMG) Common Object Request Broker Architecture (CORBA) model. Consequently, trying to define a universal Application layer model is an exercise in futility.
However, even with all the variations, the Web browser continues to grow in popularity as a windowing environment for implementing the client side of the equation.
Applications, Plug-Ins, and Applets
Not too long ago, programmers developed applications; now they develop applications, plug-ins, and applets. Although a program is a program, the name attached to it tells us something about the nature of the program. Alas, there are more gray zones than black and white ones. In spite of this overlap, some well-defined characteristics separate applications, plug-ins, and applets.
Starting with an application, the common characteristics are that:
It is a standalone program.
A desktop program, including a Web browser, invokes an application in a separate window.
An application normally implements a specific application protocol such as FTP, telnet, or SMTP.
On the other hand, a plug-in’s characteristics are that:
It represents an extension to a Web browser.
It implements a specific MIME type in an HTML document.
It normally operates within the browser window.
And then we have the Java applet. Is it a “small application,” or is it something else? A Java applet
Is written in the Java language and compiled by a Java compiler
Can be included in an HTML document
Is downloaded and executed when the HTML document is viewed
Requires the Java runtime to execute
Whereas applications and plug-ins must be ported to each hardware platform, applets run on any platform that has a Java runtime. Thus, applets provide an object-oriented, multiplatform environment for the development of applications.
Most historical reviews of the Internet imply that networking began with ARPAnet. In a sense, however, digital transmission of data began much earlier. In 1837 both Sir Charles Wheatstone in Great Britain and Samuel B. Morse in the United States announced their telegraphic inventions, and Morse publicly demonstrated the telegraph in 1844. In 1874, Thomas Edison invented the idea of multiplexing two signals in each direction over a single wire. With higher speeds and multiplexing, the teletype eventually replaced Morse's manual system; a few teletype installations still exist today.
The early telegraph systems were, in modern terms, point-to-point links. As the industry grew, switching centers acted as relay stations and paper tape was the medium that the human routers used to relay information from one link to another. Figure 1.1 illustrates a simple single-layer telegraphic network configuration. Figure 1.2 shows a more complex multilayered network.
The links of these networks were point-to-point asynchronous serial connections. For a paper tape network, the incoming information was punched on paper tape by high-speed paper tape punches and was then manually loaded on an outgoing paper tape reader.
Although this activity might seem like ancient history to younger readers, let us put this story into a more understandable framework. In early 1962, Paul Baran and his colleagues at the RAND Corporation were tackling the problem of how to build a computer network that could survive a nuclear war.
The year 1969 was a year of milestones. Not only did NASA place the first astronauts on the moon but also, and with much less fanfare, the Department of Defense's Advanced Research Projects Agency (ARPA) contracted with Bolt, Beranek, and Newman (BBN) to develop a packet-switched network based on Paul Baran's ideas. The initial project linked computers at the University of California at Los Angeles (UCLA), the Stanford Research Institute (SRI) in Menlo Park, California, the University of California at Santa Barbara, and the University of Utah in Salt Lake City, Utah. On the other side of the continent from the ARPAnet action, Ken Thompson and Dennis M. Ritchie brought UNIX to life at Bell Labs (now Lucent Technologies) in Murray Hill, New Jersey.
Even though message switching was well known, the original ARPAnet provided only three services: remote login (telnet), file transfer, and remote printing. In 1972, when ARPAnet consisted of 37 sites, e-mail joined the ranks of ARPAnet services. In October 1972 ARPAnet was demonstrated to the public at the International Conference on Computer Communications in Washington, D.C. In the following year, TCP/IP was proposed as a standard for ARPAnet.
The amount of military-related traffic continued to increase on ARPAnet. In 1975 the Defense Communications Agency (DCA) took control of ARPAnet; ARPA itself had by then been renamed DARPA (Defense Advanced Research Projects Agency). Many non-government organizations wanted to connect to ARPAnet, but DARPA limited private-sector connections to defense-related organizations. This policy led to the formation of other networks such as BBN's commercial network Telenet.
The year 1975 marked the beginning of the personal computer industry's rapid growth. In those days when you bought a microcomputer, you received bags of parts that you then assembled. Assembling a computer was a lot of work, for a simple 8KB memory card required over 1,000 solder connections. Only serious electronics hobbyists, such as those who attended the Homebrew Computer Club meetings at the Stanford Linear Accelerator Center on Wednesday nights, built computers.
In 1976, four years after the initial public announcement that ARPAnet would use packet-switching technology, telephone companies from around the world, through the auspices of the CCITT (Consultative Committee for International Telegraph and Telephone), announced the X.25 standard. Although both ARPAnet and X.25 used packet switching, there was a crucial difference in the implementations. As the precursor of TCP/IP, the ARPAnet protocol was based on the end-to-end principle; that is, only the ends are trusted and the carrier is considered unreliable.
On the other hand, the telephone companies preferred a more controllable protocol. They wanted to build packet-switched networks that used a trusted carrier, and they (the phone companies) wanted to control the input of network traffic. Therefore, CCITT based the X.25 protocol on the hop-to-hop principle, in which each hop verifies that it received the packet correctly. CCITT also reduced per-packet overhead by introducing virtual circuits, since full path information does not need to travel in every packet.
In contrast to ARPAnet, in which every packet contained enough information to take its own path, with the X.25 protocol the first packet contains the path information and establishes a virtual circuit. After the initial packet, every other packet follows the same virtual circuit. Although this optimizes the flow of traffic over slow links, it means that the connection depends on the continued existence of the virtual circuit.
The end-to-end principle of TCP/IP and the hop-to-hop principle of X.25 represent opposing views of the data transfer process between the source and destination. TCP/IP assumes that the carrier is unreliable, allows every packet to take a different route to the destination, and does not worry about the amount of traffic flowing through the various paths to the destination. On the other hand, X.25 corrects errors at every hop to the destination, creates a single virtual path for all packets, and regulates the amount of traffic a device sends to the X.25 network.
The year 1979 was another milestone year for the future of the Internet. Computer scientists from all over the world met to establish a research computer network called Usenet. Usenet was a dial-up network using UUCP (UNIX-to-UNIX copy). It offered Usenet News and mail service. The mail service required a user to enter the entire path to the destination machine using UUCP bang addressing, wherein the names of the different machines are separated by exclamation marks (bangs), for example, bighost!relay!target!user. Even though I sent mail on a regular basis, I always had problems getting the address right. Only a few UUCP networks are left today, but Usenet News continues as NetNews. Also in 1979, Onyx Systems released the first commercial version of UNIX on a microcomputer.
The most crucial event for TCP/IP occurred on January 1, 1983, when TCP/IP became the standard protocol for ARPAnet, which by then provided connections to 500 sites. On that day the Internet was born. Since the late 1970s, many government, research, and academic networks had been using TCP/IP; but with the final conversion of ARPAnet, the various TCP/IP networks had a protocol that facilitated internetworking. In the same year, the military part of ARPAnet split off to form MILNET. As the result of funding from DARPA, the University of California at Berkeley released 4.2BSD UNIX with a TCP/IP stack. In addition, Novell released NetWare based on the XNS protocol developed at Xerox PARC, Proteon shipped a software-based router using the PDP-11, and C++ was transformed from an idea to a viable language.
That was the year in which the idea of building local-area networks (LANs) was new and hot. With the introduction of LANs, the topology of networks changed from the representation shown in Figure 1.2, which ties legacy systems together, to that shown in Figure 1.3, which ties LANs together.
With the growth in number of organizations connecting to ARPAnet and the increasing number of LANs connected to ARPAnet, another problem surfaced. TCP/IP routes traffic according to the destination’s IP address.
The IP address is a 32-bit number divided into four octets for the sake of human readability. Computers work happily with numbers, but humans remember names better than numbers. When ARPAnet was small, systems used the host file (in UNIX the file is /etc/hosts) to resolve names to Internet Protocol (IP) addresses. The Network Information Center (NIC) maintained the master file, and individual sites periodically downloaded it. As ARPAnet grew, this arrangement became unmanageable in a fast-growing and dynamic network.
In 1984 the domain name system (DNS) replaced downloading the host file from NIC (the section “IP Addresses and Domain Names” discusses the relationship between the two in more detail). With the implementation of DNS, the management of mapping names to addresses moved out to the sites themselves.
For the next seven years, the Internet entered a growth phase. In 1986 the National Science Foundation created NSFNET to link supercomputing centers via a high-speed (56Kbps) backbone. Although NSFNET was strictly noncommercial, it enabled organizations to obtain an Internet connection without having to meet ARPAnet's defense-oriented policy. By 1990 the organizations connected to ARPAnet had completed their move to NSFNET, and ARPAnet ceased to exist. NSFNET closed its doors five years later, and commercial providers took over the Internet world.
Until 1990 the primary Internet applications were e-mail, listserv, telnet, and FTP. In 1990, McGill University introduced Archie, an FTP search tool for the Internet. In 1991, the University of Minnesota released Gopher.
Gopher’s hierarchical menu structure helped users organize documents for presentation over the Internet. Gopher servers became so popular that by 1993 thousands of Gopher servers contained over a million documents. To find these documents, a person used the Gopher search tool Veronica (very easy rodent-oriented netwide index to computerized archives). These search tools are important, but they are not the ones that sparked the Internet explosion.
In 1991 Tim Berners-Lee, a physicist at CERN in Geneva, Switzerland, released the protocols for the World Wide Web (WWW). Seeking a way to link scientific documents together, he created the Hypertext Markup Language (HTML), an application of the Standard Generalized Markup Language (SGML). In developing the WWW, he drew from the 1965 work of Ted Nelson, who coined the word hypertext. However, the event that really fueled the Internet explosion was the release of Mosaic by the National Center for Supercomputing Applications (NCSA) in 1993.
From a standard for textual documents, HTML has grown to include images, sound, video, and interactive screens via the common gateway interface (CGI), Microsoft's ActiveX (previously called OLE controls), and Sun Microsystems' Java. The changes occur so fast that the standards lag behind the market.
How large is the Internet today?
That is a good question. We could measure the size of the Internet by the number of network addresses granted by InterNIC, but these addresses can be subnetted, so the number of networks is much larger than the InterNIC figures suggest. We could measure the size of the Internet by the number of domain names, yet some of these names are vanity names (a domain name assigned to an organization but supported by servers that host multiple domain names) and others are aliases. Vanity names and aliases result in a higher name count than the number of IP addresses, because multiple names point to the same IP address.
Starting in the fall of 1995, companies and organizations began to include their uniform resource locator (URL), along with their street address, telephone number, and fax number, in television ads, newspaper ads, and consumer newsletters. Therefore, a company's presence on the Internet, as represented by its Web address (the URL), reached a new level of general acceptance. The Internet emerged from academia to become a household word.
The question arises as to where all this technology is going. Because my crystal ball is broken, please don’t hold me to what I say.
Internet Explorer does not have access to window.location.origin, which is a shame because it is a handy property to have, but we can work around the omission with a fairly straightforward check before we access .origin.
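A minimal sketch of that check, written as a standalone helper so the fallback logic is easy to test in isolation (the getOrigin name is my own; in a page you would apply it to window.location directly):

```javascript
// Build an origin string from a Location-like object. Acts as a
// fallback for browsers (older IE) that lack location.origin.
function getOrigin(loc) {
  if (loc.origin) {
    return loc.origin; // modern browsers provide it natively
  }
  // Reconstruct it: protocol + "//" + hostname, plus ":port" when present.
  return loc.protocol + "//" + loc.hostname +
      (loc.port ? ":" + loc.port : "");
}

// In a browser you would patch the gap once at startup:
// if (!window.location.origin) {
//   window.location.origin = getOrigin(window.location);
// }
```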
This means given these three snippets of XML from one well-formed document:
<!ENTITY MyParamEntity "Has been expanded">

<!--
Within this comment I can use ]]>
and other reserved characters like <
&, ', and ", but %MyParamEntity; will not be expanded
(if I retrieve the text of this node it will contain
%MyParamEntity; and not "Has been expanded")
and I can't place two dashes next to each other.
-->

<![CDATA[
Within this Character Data block I can
use double dashes as much as I want (along with <, &, ', and ")
*and* %MyParamEntity; will be expanded to the text
"Has been expanded" ... however, I can't use
the CEND sequence (if I need to use it I must escape one of the
brackets or the greater-than sign).
]]>
Why does it look so weird?
The CDATA section is a marked section. In SGML there is both an abstract syntax and a concrete syntax. The abstract syntax of a marked section declaration begins with a markup declaration open (mdo) delimiter followed by a declaration subset open (dso) delimiter. A status keyword comes next, followed by a second declaration subset open (dso) delimiter. A marked section ends with a marked section close (msc) delimiter followed by a markup declaration close (mdc) delimiter. Therefore the abstract syntax of a marked section declaration is:
mdo dso status-keyword dso my-data msc mdc
A concrete syntax is defined for each document. This syntax is specified within the SGML declaration associated with each document. The concrete syntax defines the delimiters to be used for the document. The default SGML delimiters, which I assume are defined in ISO 8879:1986, are as follows:
Markup declaration open: <!
Declaration subset open: [
Marked section close: ]]
Markup declaration close: >
But you are free to define your own concrete syntax and so can modify the characters used as the delimiters.
Therefore the default concrete syntax of a marked section declaration is:
<![ status-keyword [my-data]]>
Possible status-keywords are: CDATA, RCDATA, IGNORE, INCLUDE, TEMP
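As a quick illustration (the entity name and values here are my own), the INCLUDE and IGNORE keywords let a DTD switch declarations on and off; note that in XML such conditional sections are allowed only in the external DTD subset:

```xml
<![INCLUDE[
  <!ENTITY release.status "draft">
]]>
<![IGNORE[
  <!ENTITY release.status "final">
]]>
```

Both sections follow the same default concrete syntax shown above: `<![`, a status keyword, `[`, the data, and `]]>`.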
CSS is not at its best when it comes to creating layouts. The flexible box model intended for this purpose is not yet ready to use because of the lack of support in Internet Explorer, so designers usually have to use floats or set an element's display property to inline-block to achieve the effects they want. This shortcoming of CSS is even more bothersome when you want to make your website responsive.
In this post I'll focus on a specific problem: how to write styles when you want a fluid content box and a fixed-size content box that sit next to each other and share the same horizontal space.
Horizontal space is always limited by the size of the user's screen or the browser's window: only a fixed number of pixels (100% in relative units) is available, and that number changes with the screen or window size. That's why a fixed-size box will always occupy a different relative share when expressed in %. And here is the problem. You need to declare the width of a fluid box in percent, but you can't know what percentage the fixed-size boxes will take. Whatever value you write, either the fluid box will overlap the fixed one, or the space between them will be too big. That's why tricks are necessary to make your website beautiful.
As a solution I'll use two methods. The first is to use negative margins on floated elements. It is less intuitive and requires more CSS rules and an additional wrapper in the HTML, but it is supported in all browsers and is, in some cases, more flexible. The second is to use the table-cell value of the display property. This method is easier, requires fewer CSS rules and less HTML, and gives added value: vertical alignment and the same height for all elements in a row. However, it is not supported by IE 7 and earlier. Which method you use is up to you and your needs.
Fluid and fixed-size content — solution
I'll use the classic layout: a big content box with a small, fixed-size sidebar on the right. Any other variation, such as two fluid boxes with one fixed box, or two fixed boxes with one fluid box in the middle, can easily be derived from this basic one.
The syntax for the negative-margin solution and the table-cell solution will differ slightly. With negative margins we need an additional block element (in this example the div with id="inner-block") that is not necessary when working with table-cells.
Inside those blocks you can put almost any other content: text, other blocks, lists, images, and so on. In the inner block you can even put other flexible blocks to make a layout with more than one flexible column.
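As a sketch, the markup for the negative-margin variant might look like this (the ids match the ones used in the CSS below; the placeholder text is mine, and the extra #inner-block wrapper is needed only for the negative-margin method):

```html
<div id="fluid">
  <div id="inner-block">
    Main content goes here.
  </div>
</div>
<div id="fixed-width">
  Sidebar content goes here.
</div>
```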
Negative margin solution
The CSS for the negative-margin solution goes like this:

#fluid {
    float: left;
    width: 100%;
    margin-right: -250px; /* The size of the fixed block. */
}

#inner-block {
    margin-right: 250px; /* The size of the fixed block. */
}

#fixed-width {
    float: left;
    width: 250px; /* The size of the fixed block. */
}
There are some points that require explanation.
I assumed the fixed-width block will be 250px wide. You have to change that value to suit your needs, but note that the #fluid block has a negative margin of the same size. You might also want some space between the blocks. The easiest way to get it is to increase #inner-block's margin by the amount of space you want between the #fixed-width and #fluid blocks.
If you're wondering what's really happening here, it's simple. The container #fluid box and the sidebar #fixed-width are both floated, so under normal circumstances they would sit next to each other on the same line. But because #fluid is 100% wide, there's not enough horizontal space left for other floated elements in the browser's window — #fluid takes it all. That's why we need two rules. First, we pull the sidebar back into the content area using a negative value in the margin-right property. But that's not enough: on its own, the negative margin would just let the browser widen the floated box by that amount. That's why the width: 100% rule is required, to keep #fluid's size bounded by the window or — in other cases — by #fluid's container element.
Now the only issue is that the content of the two elements overlaps. Hence the #inner-block, whose margin is big enough to separate the two elements' content.
Table-cell solution
This one is simpler and more intuitive, but it is not supported by IE 7 and earlier. We also don't need the #inner-block element. The CSS:
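A minimal sketch of those three rules, assuming the same #fluid and #fixed-width elements as before:

```css
#fluid {
    display: table-cell; /* a cell without a declared width takes the remaining space */
}

#fixed-width {
    display: table-cell;
    width: 250px; /* The size of the fixed block. */
}
```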
And that's (almost) all. Just three rules. All we did was change the elements' display behaviour to that of cells in a table. Cells that don't have a declared width automatically take the remaining space, which means they'll be responsive. The only issue that may arise with this method is that it might not work unless you also set the display property of the container (the parent of both #fluid and #fixed-width) to table (display: table). The <body> is also such a container, so you can make it display as a table too.
The example I used is a good base for other types of layouts. If you want two fluid content boxes, just put two block elements inside #inner-block or directly inside #fluid (depending on which method you use) and set their widths in relative units (%) without any hacks. If you would like two fixed-size boxes and a fluid box in the middle, just put the left fixed box before the #fluid div in your HTML, make it float: left, set its width, add a negative left margin to #fluid, and add a left margin to #inner-block.
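Those steps can be sketched like this for the negative-margin method (the #left-fixed id and its 200px width are my own illustrative assumptions; the 250px right sidebar comes from the earlier example):

```css
/* Hypothetical three-column variant: fixed left, fluid middle, fixed right. */
#left-fixed {
    float: left;
    width: 200px;         /* assumed size of the left fixed block */
}

#fluid {
    float: left;
    width: 100%;
    margin-left: -200px;  /* pull the fluid box back under the left sidebar */
    margin-right: -250px; /* the size of the right fixed block */
}

#inner-block {
    margin-left: 200px;   /* keep content clear of the left sidebar */
    margin-right: 250px;  /* keep content clear of the right sidebar */
}

#fixed-width {
    float: left;
    width: 250px;
}
```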
So what are you waiting for? Make your website beautifully responsive!