Sunday, August 23, 2009

Hypertext Transfer Protocol (HTTP)

Hypertext Transfer Protocol (HTTP) – RFC 2616

The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. HTTP has been in use by the World-Wide Web global information initiative since 1990. The first version of HTTP, referred to as HTTP/0.9, was a simple protocol for raw data transfer across the Internet. HTTP allows an open-ended set of methods and headers that indicate the purpose of a request. It builds on the discipline of reference provided by the Uniform Resource Identifier (URI), as a location (URL) or name (URN), for indicating the resource to which a method is to be applied. Messages are passed in a format similar to that used by Internet mail as defined by the Multipurpose Internet Mail Extensions (MIME). HTTP is also used as a generic protocol for communication between user agents and proxies/gateways to other Internet systems, including those supported by the SMTP, NNTP, FTP, Gopher, and WAIS protocols. In this way, HTTP allows basic hypermedia access to resources available from diverse applications.
The HTTP protocol is a request/response protocol. A client sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content over a connection with a server. The server responds with a status line, including the message's protocol version and a success or error code, followed by a MIME-like message containing server information, entity meta information, and possible entity-body content. Most HTTP communication is initiated by a user agent and consists of a request to be applied to a resource on some origin server.
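The sketch below is a minimal illustration of that request/response exchange, sent over a raw TCP socket from Python; the host name is purely illustrative, and real clients would normally use an HTTP library instead.

```python
# A minimal sketch of the HTTP request/response exchange described above,
# sent over a plain TCP socket. "example.com" is only an illustrative host.
import socket

HOST = "example.com"
request = (
    f"GET / HTTP/1.1\r\n"       # request line: method, URI, protocol version
    f"Host: {HOST}\r\n"         # headers form the MIME-like message
    f"Connection: close\r\n"
    f"\r\n"                     # blank line ends the header block
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):   # read until the server closes the connection
        response += chunk

# The first line of the response is the status line: protocol version,
# status code, and reason phrase, e.g. "HTTP/1.1 200 OK".
print(response.split(b"\r\n", 1)[0].decode("ascii"))
```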
HTTP uses a "&lt;major&gt;.&lt;minor&gt;" numbering scheme to indicate versions of the protocol. The protocol versioning policy is intended to allow the sender to indicate the format of a message and its capacity for understanding further HTTP communication, rather than the features obtained via that communication.

Reference:
Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., & Berners-Lee, T. (1999). Hypertext Transfer Protocol -- HTTP/1.1 (RFC 2616). Retrieved August 23, 2009, from http://www.ietf.org/rfc/rfc2616.txt?number=2616

Domain Name Service (DNS)

Domain Name Service (DNS) – RFC 1035
Domain Name Service resolves hostnames, specifically Internet names. You do not have to use DNS; you can simply type the IP address of any device you want to communicate with. An IP address identifies hosts on a network and on the Internet as well. However, DNS was designed to make our lives easier. DNS allows you to use a domain name to specify an IP address. The goal of domain names is to provide a mechanism for naming resources in such a way that the names are usable in different hosts, networks, protocol families, internets, and administrative organizations. Name servers manage two kinds of data. The first kind of data is held in sets called zones; each zone is the complete database for a particular "pruned" subtree of the domain space. This data is called authoritative. A name server periodically checks to make sure that its zones are up to date, and if not, obtains a new copy of updated zones.
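As a small illustration, the sketch below asks the host's resolver (which normally queries DNS) to map a name to its addresses; the host name used here is only an example.

```python
# A small sketch of name resolution from a host's point of view: the system
# resolver (normally backed by DNS) maps a name to one or more IP addresses.
import socket

name = "www.example.com"                            # illustrative name only
infos = socket.getaddrinfo(name, None)              # ask the configured resolver
addresses = sorted({info[4][0] for info in infos})  # sockaddr tuples carry the address
print(f"{name} resolves to: {addresses}")
```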

Reference:
Mockapetris, P. (1987). Domain Names - Implementation and Specification (RFC 1035). Retrieved August 22, 2009, from http://www.ietf.org/rfc/rfc1035.txt?number=1035

Dynamic Host Configuration Protocol (DHCP)

Dynamic Host Configuration Protocol (DHCP) - RFC 2131

The Dynamic Host Configuration Protocol (DHCP) provides configuration parameters to Internet hosts. DHCP consists of two components: a protocol for delivering host-specific configuration parameters from a DHCP server to a host and a mechanism for allocation of network addresses to hosts. DHCP is built on a client-server model, where designated DHCP server hosts allocate network addresses and deliver configuration parameters to dynamically configured hosts. The Dynamic Host Configuration Protocol (DHCP) provides a framework for passing configuration information to hosts on a TCP/IP network. DHCP is based on the Bootstrap Protocol (BOOTP).
Dynamic Host Configuration Protocol (DHCP) gives IP addresses to hosts. It allows easier administration and works well in small to very large network environments. Many types of hardware, from dedicated servers to routers, can act as a DHCP server. Here is a list of the information a DHCP server can provide: IP address, subnet mask, domain name, default gateway (router), DNS server addresses, and WINS information.
DHCP supports three mechanisms for IP address allocation. In "automatic allocation", DHCP assigns a permanent IP address to a client. In "dynamic allocation", DHCP assigns an IP address to a client for a limited period of time. In "manual allocation", a client's IP address is assigned by the network administrator, and DHCP is used simply to convey the assigned address to the client.
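The toy sketch below models those three allocation mechanisms in a few lines of Python; it is not the DHCP wire protocol, and the class, addresses, and MAC values are invented purely for illustration.

```python
# A toy model (not the on-the-wire protocol) of the three allocation
# mechanisms: manual (administrator-defined table), automatic (permanent
# assignment from a pool), and dynamic (time-limited lease from a pool).
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class Lease:
    ip: str
    expires: Optional[float]   # None means a permanent (automatic/manual) assignment

class ToyDhcpServer:
    def __init__(self, pool, manual_table=None, lease_seconds=3600):
        self.pool = list(pool)               # free addresses for dynamic/automatic use
        self.manual = manual_table or {}     # MAC -> IP fixed by the administrator
        self.lease_seconds = lease_seconds
        self.leases = {}                     # MAC -> Lease currently held

    def request(self, mac, permanent=False):
        if mac in self.manual:                       # manual allocation
            lease = Lease(self.manual[mac], None)
        elif permanent:                              # automatic allocation
            lease = Lease(self.pool.pop(0), None)
        else:                                        # dynamic allocation (limited lifetime)
            lease = Lease(self.pool.pop(0), time.time() + self.lease_seconds)
        self.leases[mac] = lease
        return lease

server = ToyDhcpServer(pool=["192.168.1.10", "192.168.1.11"],
                       manual_table={"aa:bb:cc:dd:ee:ff": "192.168.1.200"})
print(server.request("11:22:33:44:55:66"))   # dynamic lease from the pool
print(server.request("aa:bb:cc:dd:ee:ff"))   # administrator-assigned address
```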

Reference:
Droms, R. (1997). Dynamic Host Configuration Protocol (RFC 2131). Retrieved August 22, 2009, from http://www.ietf.org/rfc/rfc2131.txt?number=2131

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) – RFC 793

The Transmission Control Protocol (TCP) is intended for use as a highly reliable host-to-host protocol between hosts in packet-switched computer communication networks, and in interconnected systems of such networks.
Transmission Control Protocol (TCP) takes large blocks of information from an application and breaks them into segments. It numbers and sequences each segment so that the destination's TCP protocol can put the segments back into the order the application intended. After these segments are sent, TCP waits for an acknowledgment from the receiving end's TCP virtual circuit session, retransmitting any segments that are not acknowledged.
TCP is a full-duplex, connection-oriented, reliable, and accurate protocol, but establishing all these terms and conditions, in addition to error checking, is no small task. TCP is very complicated and, not surprisingly, costly in terms of network overhead.
TCP is a connection-oriented, end-to-end reliable protocol designed to fit into a layered hierarchy of protocols which support multi-network applications. The TCP provides for reliable inter-process communication between pairs of processes in host computers attached to distinct but interconnected computer communication networks. Very few assumptions are made as to the reliability of the communication protocols below the TCP layer.
The TCP is intended to provide a reliable process-to-process communication service in a multi-network environment. The TCP is intended to be a host-to-host protocol in common use in multiple networks. The primary purpose of the TCP is to provide reliable, securable logical circuit or connection service between pairs of processes. The TCP is able to transfer a continuous stream of octets in each direction between its users by packaging some number of octets into segments for transmission through the internet system. The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system. TCP provides a means for the receiver to govern the amount of data sent by the sender. To allow for many processes within a single host to use TCP communication facilities simultaneously, the TCP provides a set of addresses or ports within each host. The reliability and flow control mechanisms described above require that TCPs initialize and maintain certain status information for each data stream.
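A minimal sketch of that service from an application's point of view is shown below: the operating system's TCP implementation handles sequencing, acknowledgment, retransmission, and flow control, while the program simply opens a connection to a port and exchanges a byte stream. The port, addresses, and data are illustrative.

```python
# A minimal sketch of TCP's connection-oriented, reliable byte-stream service
# using Python sockets; sequencing and retransmission happen inside the OS.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()        # the three-way handshake completes here
    with conn:
        data = conn.recv(1024)    # TCP delivers the bytes in order, without loss
        conn.sendall(data)        # echo them back over the same connection

threading.Thread(target=echo_once, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over a reliable stream")
    print(client.recv(1024))      # b'hello over a reliable stream'
srv.close()
```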

Reference:
Information Sciences Institute, University of Southern California. (1981). Transmission Control Protocol (RFC 793). Retrieved August 22, 2009, from http://www.ietf.org/rfc/rfc0793.txt?number=793

Internet Protocol (IP)

Internet Protocol (IP) – RFC 791

Internet Protocol (IP) essentially is the Internet layer of the DoD model. It is designed for use in interconnected systems of packet-switched computer communication networks. IP holds the big picture in that it is aware of all the interconnected networks. It can do this because all the machines on the network have a software, or logical, address called an IP address. IP looks at each packet's address. Then, using a routing table, it decides where a packet is to be sent next, choosing the best path. The protocols of the Network Access layer at the bottom of the DoD model do not possess IP's enlightened scope of the network; they deal only with physical links (local networks).
IP receives segments from the Host-to-Host layer and fragments them into datagrams (packets) if necessary. IP then reassembles the datagrams back into segments on the receiving side. Each datagram is assigned the IP address of the sender and of the recipient. Each router that receives a datagram makes routing decisions based on the packet's destination IP address.
The internet protocol is specifically limited in scope to provide the functions necessary to deliver a package of bits (an internet datagram) from a source to a destination over an interconnected system of networks.
The internet protocol implements two basic functions: addressing and fragmentation.
The internet protocol uses four key mechanisms in providing its service: Type of Service, Time to Live, Options, and Header Checksum. The Type of Service is used to indicate the quality of the service desired. The Time to Live is an indication of an upper bound on the lifetime of an internet datagram. The Options provide for control functions needed or useful in some situations but unnecessary for the most common communications. The Header Checksum provides a verification that the information used in processing an internet datagram has been transmitted correctly.
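Of those mechanisms, the Header Checksum is simple enough to sketch: it is the 16-bit one's-complement of the one's-complement sum of the header's 16-bit words, computed with the checksum field itself set to zero. The header bytes below are made up for the example.

```python
# A minimal sketch of the IP header checksum: the one's-complement of the
# one's-complement sum of all 16-bit words in the header (checksum field zeroed).
def ip_header_checksum(header: bytes) -> int:
    if len(header) % 2:
        header += b"\x00"                          # pad to whole 16-bit words
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# Illustrative 20-byte header with the checksum field (bytes 10-11) zeroed.
example_header = bytes.fromhex("4500003c1c4640004006" "0000" "ac100a63ac100a0c")
print(hex(ip_header_checksum(example_header)))
```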
The internet protocol does not provide a reliable communication facility. There are no acknowledgments, either end-to-end or hop-by-hop. There is no error control for data, only a header checksum. There are no retransmissions. There is no flow control. Errors detected may be reported via the Internet Control Message Protocol (ICMP).

Reference:
Information Sciences Institute, University of Southern California. (1981). Internet Protocol (RFC 791). Retrieved August 22, 2009, from http://www.ietf.org/rfc/rfc0791.txt?number=791

What are Ethernet, Local Talk, Token Ring, FDDI, and ATM?

Ethernet
Ethernet is a physical and data link layer technology for local area networks (LANs). Ethernet was invented by engineer Robert Metcalfe. It uses wires (meaning it is not a wireless technology). The wires used for a LAN are mostly those headed by an RJ-45 jack, which is similar to the jack plugged into your telephone set, but twice as big. Some Ethernet networks use coaxial cables, but that is rarer and found mostly in larger LANs that span the areas between buildings. Ethernet is by far the most popular LAN protocol used today. It is so popular that if you buy a network card to install on your machine, you will get an Ethernet card, unless you ask for something different, if of course that different protocol is available.
Ethernet has evolved over the years. Today, you can get cheap Ethernet LAN cards with speeds up to 100 Mbps, while the fastest Ethernet reaches gigabit speeds (1 Gbps = 1000 Mbps). When first widely deployed in the 1980s, Ethernet supported a maximum theoretical data rate of 10 megabits per second (Mbps). Later, Fast Ethernet standards increased this maximum data rate to 100 Mbps. Today, Gigabit Ethernet technology further extends peak performance up to 1000 Mbps.
Higher-level network protocols like Internet Protocol (IP) use Ethernet as their transmission medium. Data travels over Ethernet inside protocol units called frames.
The run length of individual Ethernet cables is limited to roughly 100 meters, but Ethernet can be easily extended to link entire schools or office buildings using network bridge devices.
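As a small illustration of the frame idea, the sketch below unpacks the 14-byte Ethernet header (destination MAC, source MAC, EtherType) that precedes the payload; the byte values are made up for the example.

```python
# A minimal sketch of parsing an Ethernet frame header: 6-byte destination
# MAC, 6-byte source MAC, and a 2-byte EtherType, followed by the payload.
import struct

def format_mac(raw: bytes) -> str:
    return ":".join(f"{b:02x}" for b in raw)

def parse_ethernet_header(frame: bytes):
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return format_mac(dst), format_mac(src), hex(ethertype), frame[14:]

# Illustrative frame: broadcast destination, made-up source, EtherType 0x0800 (IP).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"...payload..."
print(parse_ethernet_header(frame))
```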

Local Talk
When the folks at Apple decided to add networking to their computers, they created a unique networking technology called Local Talk. Local Talk used a bus topology with each device daisy-chained to the next on the segment, and proprietary cabling with small round DIN-style connectors. The rise of Ethernet, along with Local Talk's very slow speed, led to the demise of Local Talk.

Token Ring
A Token Ring network is a local area network (LAN) in which all computers are connected in a ring or star topology and a bit- or token-passing scheme is used in order to prevent the collision of data between two computers that want to send messages at the same time. The Token Ring protocol is the second most widely used protocol on local area networks after Ethernet. The IBM Token Ring protocol led to a standard version, specified as IEEE 802.5. Both protocols are used and are very similar. The IEEE 802.5 Token Ring technology provides for data transfer rates of either 4 or 16 megabits per second. Very briefly, here is how it works:
Empty information frames are continuously circulated on the ring.
When a computer has a message to send, it inserts a token in an empty frame (this may consist of simply changing a 0 to a 1 in the token bit part of the frame) and inserts a message and a destination identifier in the frame.
The frame is then examined by each successive workstation. If the workstation sees that it is the destination for the message, it copies the message from the frame and changes the token back to 0.
When the frame gets back to the originator, it sees that the token has been changed to 0 and that the message has been copied and received. It removes the message from the frame.
The frame continues to circulate as an "empty" frame, ready to be taken by a workstation when it has a message to send.
The token scheme can also be used with bus topology LANs. The standard for the Token Ring protocol is Institute of Electrical and Electronics Engineers (IEEE) 802.5. The Fiber Distributed-Data Interface (FDDI) also uses a Token Ring protocol.
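The toy loop below sketches that token-passing idea (it is not the IEEE 802.5 frame format): a free token circulates, a station seizes it to transmit, the destination copies the frame, and the originator removes it when it comes back around. The station names and message are invented for the example.

```python
# A toy simulation of token passing on a ring: only the station holding the
# free token may transmit, so two stations can never collide.
stations = ["A", "B", "C", "D"]
pending = {"B": ("D", "hello from B")}     # station B wants to send a frame to D

token_busy = False
frame = None
for hop in range(2 * len(stations)):       # walk around the ring a couple of times
    station = stations[hop % len(stations)]
    if not token_busy and station in pending:
        dest, payload = pending.pop(station)
        frame = {"src": station, "dst": dest, "data": payload}
        token_busy = True                  # token flipped from free to busy
        print(f"{station} seizes the token and transmits to {dest}")
    elif token_busy and station == frame["dst"]:
        print(f"{station} copies the frame: {frame['data']!r}")
    elif token_busy and station == frame["src"]:
        print(f"{station} sees its frame has returned and removes it; token is free again")
        frame, token_busy = None, False
```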

FDDI
FDDI stands for Fiber Distributed Data Interface. The FDDI standard is ANSI X3T9.5. The FDDI topology is a ring with two counter-rotating rings for reliability, with no hubs. The cable type is fiber-optic. Connectors are specialized. The media access method is token passing. Multiple tokens may be used by the system. The maximum length is 100 kilometers. The maximum number of nodes on the network is 500. Speed is 100 Mbps. FDDI is normally used as a backbone to link other networks. A typical FDDI network can include servers, concentrators, and links to other networks.
FDDI token passing allows multiple frames to circulate around the ring at the same time. Priority levels of a data frame and token can be set to allow servers to send more data frames. Time-sensitive data may also be given higher priority. The second ring in an FDDI network is a means of recovering when there are breaks in the cable. The primary ring is normally used, but if the nearest downstream neighbor stops responding, the data is sent on the secondary ring in an attempt to reach the computer. Therefore a break in the cable will result in the secondary ring being used. There are two types of network cards:
· Dual attachment stations (DAS), used for servers and concentrators, are attached to both rings.
· Single attachment stations (SAS) are attached to one ring and are used to attach workstations to concentrators.
A router or switch can link an FDDI network to a local area network (LAN). Normally FDDI is used to link LANs together since it covers long distances.

ATM
ATM is a high-speed networking standard designed to support both voice and data communications. ATM is normally utilized by Internet service providers on their private long-distance networks. ATM operates at the data link layer (Layer 2 in the OSI model) over either fiber or twisted-pair cable.
The performance of ATM is often expressed in the form of OC (Optical Carrier) levels, written as "OC-xxx." Performance levels as high as 10 Gbps (OC-192) are technically feasible with ATM. More common performance levels for ATM are 155 Mbps (OC-3) and 622 Mbps (OC-12).
ATM technology is designed to improve utilization and quality of service (QoS) on high-traffic networks. Without routing and with fixed-size cells, networks can much more easily manage bandwidth under ATM than under Ethernet, for example. The high cost of ATM relative to Ethernet is one factor that has limited its adoption to "backbone" and other high-performance, specialized networks.
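The fixed cell size is the key contrast with Ethernet's variable-length frames: every ATM cell is 53 bytes, a 5-byte header plus a 48-byte payload. The sketch below chops a packet into 48-byte cell payloads; the "header" it attaches is a simplified placeholder rather than a real ATM header.

```python
# A minimal sketch of ATM's fixed-size cells: a variable-length packet is
# segmented into 48-byte payloads, each carried in a 53-byte cell.
CELL_PAYLOAD = 48

def segment_into_cells(packet: bytes, vci: int):
    cells = []
    for offset in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[offset:offset + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")   # pad the final cell
        header = vci.to_bytes(5, "big")              # placeholder 5-byte header
        cells.append(header + chunk)                 # each cell is 53 bytes
    return cells

cells = segment_into_cells(b"x" * 100, vci=42)
print(len(cells), [len(c) for c in cells])           # 3 cells of 53 bytes each
```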

References:
http://compnetworking.about.com/cs/ethernet1/g/bldef_ethernet.htm
http://compnetworking.about.com/od/networkprotocols/g/bldef_atm.htm
http://www.comptechdoc.org/independent/networking/cert/netfddi.html




Monday, August 17, 2009

Enterprise Architecture (EA)

Enterprise Architecture (EA)

Enterprise architecture is a comprehensive framework used to manage and align an organization's Information Technology (IT) assets, people, operations, and projects with its operational characteristics. In other words, the enterprise architecture defines how information and technology will support the business operations and provide benefit for the business. Enterprise Architecture has become a common practice for large IT organizations. For the first time there is a methodology to encompass all of the various IT aspects and processes into a single practice. However, realizing the full potential of Enterprise Architecture (EA) can be challenging. There are many aspects to EA, including architecture planning, governance, taxonomies, and ontologies, all of which impact its success. Without the right guidance, tools, frameworks, and methodologies, EA can quickly become unwieldy.
It illustrates the organization’s core mission, each component critical to performing that mission, and how each of these components is interrelated. These components include:
· Guiding principles
· Organization structure
· Business processes
· People or stakeholders
· Applications, data, and infrastructure
· Technologies upon which networks, applications and systems are built
Guiding principles, organization structure, business processes, and people don’t sound very technical. That’s because enterprise architecture is about more than technology. It is about the entire organization (or enterprise) and identifying all of the bits and pieces that make the organization work.
Enterprise Architecture’s Benefits
Well-documented, well-understood enterprise architecture enables the organization to respond quickly to changes in the environment in which the organization operates. It serves as a ready reference that enables the organization to assess the impact of the changes on each of the enterprise architecture components. It also ensures the components continue to operate smoothly through the changes.