Sunday, September 16, 2007

High-Definition Multimedia Interface (HDMI)


The High-Definition Multimedia Interface (HDMI) is a licensable audio/video connector interface for transmitting uncompressed, encrypted digital streams. HDMI connects DRM-enforcing digital audio/video sources, such as a set-top box, a Blu-ray Disc player, a PC running Windows Vista, a video game console, or an AV receiver, to a compatible digital audio device and/or video monitor, such as a digital television (DTV). HDMI began to appear in 2006 on prosumer HDTV camcorders and high-end digital still cameras.

It represents the DRM alternative to consumer analog standards such as RF (coaxial cable), composite video, S-Video, SCART, component video and VGA, and digital standards such as DVI (DVI-D and DVI-I).


General notes
--------------------------------------------
HDMI supports any TV or PC video format, including standard, enhanced, and high-definition video, plus multi-channel digital audio, on a single cable. It is independent of the various DTV standards such as ATSC and DVB (-T, -S, -C), as these are encapsulations of MPEG data streams, which are passed off to a decoder and output as uncompressed video data. HDMI encodes that video data as TMDS for digital transmission over the cable.

Devices are manufactured to adhere to various versions of the specification, where each version is given a number, such as 1.0 or 1.3. Each subsequent version of the specification uses the same cables, but increases the throughput and/or capabilities of what can be transmitted over the cable. For example, previously, the maximum pixel clock rate of the interface was 165 MHz, sufficient for supporting 1080p at 60 Hz or WUXGA (1920x1200), but HDMI 1.3 increased that to 340 MHz, providing support for WQXGA (2560x1600) and beyond across a single digital link. See also: HDMI Versions.
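
The bandwidth figures above follow from simple arithmetic: TMDS carries 10 bits per pixel clock on each of three data channels (8b/10b encoding), so the aggregate bit rate is pixel clock x 3 x 10. A minimal Python sketch of that calculation:

# TMDS link bandwidth: 3 data channels, 10 bits per channel per pixel clock.
def tmds_bandwidth_gbps(pixel_clock_mhz):
    return pixel_clock_mhz * 3 * 10 / 1000.0

for clock_mhz in (165, 340):
    print(clock_mhz, "MHz ->", tmds_bandwidth_gbps(clock_mhz), "Gbit/s")
# 165 MHz -> 4.95 Gbit/s (the original HDMI limit)
# 340 MHz -> 10.2 Gbit/s (the HDMI 1.3 limit)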

HDMI also includes support for 8-channel uncompressed digital audio at a 192 kHz sample rate with 24 bits/sample, as well as any compressed stream such as Dolby Digital or DTS. HDMI supports up to 8 channels of one-bit audio (the format used on Super Audio CD) at rates up to four times that of Super Audio CD. With version 1.3, HDMI also supports lossless compressed streams such as Dolby TrueHD and DTS-HD Master Audio.
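
As a sanity check on those audio figures, eight channels of 192 kHz/24-bit PCM amount to only a few tens of Mbit/s, a small fraction of the link's video bandwidth:

# Uncompressed audio payload: 8 channels x 192 kHz x 24 bits per sample.
channels, sample_rate_hz, bits_per_sample = 8, 192_000, 24
print(channels * sample_rate_hz * bits_per_sample / 1e6, "Mbit/s")  # 36.864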

HDMI is backward-compatible with the single-link Digital Visual Interface carrying digital video (DVI-D or DVI-I, but not DVI-A) used on modern computer monitors and graphics cards. This means that a DVI-D source can drive an HDMI monitor, or vice versa, by means of a suitable adapter or cable, but the audio and remote control features of HDMI will not be available. Additionally, without support for High-bandwidth Digital Content Protection (HDCP) on the display, the signal source may prevent the end user from viewing or recording certain restricted content.

In the U.S., HDCP support is a standard feature on digital TVs with built-in digital (ATSC) tuners (the cheapest digital TVs lack it only because they lack HDMI altogether). In the PC-display industry, where computer displays rarely contain built-in tuners, HDCP support is absent from many models. For example, the first LCD monitors with HDMI connectors did not support HDCP, and few compact LCD monitors (17" or smaller) support HDCP.

The HDMI Founders include consumer electronics manufacturers Hitachi, Matsushita Electric Industrial (Panasonic/National/Quasar), Philips, Sony, Thomson (RCA), Toshiba, and Silicon Image. Digital Content Protection, LLC (a subsidiary of Intel) is providing HDCP for HDMI. In addition, HDMI has the support of major motion picture producers Fox, Universal, Warner Bros., and Disney, and system operators DirecTV and EchoStar (Dish Network) as well as CableLabs and Samsung.

Specifications
--------------------------------------------
HDMI defines the protocol and electrical specifications for the signaling, as well as the pin-out, electrical and mechanical requirements of the cable and connectors.

Connectors
The HDMI Specification has expanded to include three connectors, each intended for different markets.

The standard Type A HDMI connector has 19 pins, with bandwidth to support all SDTV, EDTV and HDTV modes and more. The plug outside dimensions are 13.9 mm wide by 4.45 mm high. Type A is electrically compatible with single-link DVI-D.

A higher resolution version called Type B is defined in HDMI 1.0. Type B has 29 pins (21.2 mm wide), allowing it to carry an expanded video channel for use with very high-resolution future displays, such as WQSXGA (3200x2048). Type B is electrically compatible with dual-link DVI-D, but is not in general use.

The Type C mini-connector is intended for portable devices. It is smaller than Type A (10.42 mm by 2.42 mm) but has the same 19-pin configuration.

Cable
The HDMI cable can be used to carry video, audio, and/or device-controlling signals (CEC). Adaptor cables, from Type A to Type C, are available.

TMDS channel
The Transition Minimized Differential Signaling (TMDS) channel:

-Carries video, audio, and auxiliary data via one of three modes called the Video Data Period, the Data Island Period, and the Control Period. During the Video Data Period, the pixels of an active video line are transmitted. During the Data Island period (which occurs during the horizontal and vertical blanking intervals), audio and auxiliary data are transmitted within a series of packets. The Control Period occurs between Video and Data Island periods.
-Signaling method: per the DVI 1.0 specification. Single-link (Type A HDMI) or dual-link (Type B HDMI).
-Video pixel rate: 25 MHz to 340 MHz (Type A, as of 1.3) or to 680 MHz (Type B). Video formats with rates below 25 MHz (e.g. 13.5 MHz for 480i/NTSC) are transmitted using a pixel-repetition scheme (see the sketch after this list). From 24 to 48 bits per pixel can be transferred, regardless of rate. Supports 1080p at rates up to 120 Hz and WQSXGA.
-Pixel encodings: RGB 4:4:4, YCbCr 4:4:4 (8–16 bits per component); YCbCr 4:2:2 (12 bits per component)
-Audio sample rates: 32 kHz, 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz, 192 kHz.
-Audio channels: up to 8.
-Audio streams: any IEC61937-compliant stream, including high bitrate (lossless) streams (Dolby TrueHD, DTS-HD Master Audio).
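
The pixel-repetition rule mentioned in the list is easy to model: each pixel is simply sent more than once until the effective clock clears HDMI's 25 MHz floor. A toy sketch:

# Pixel repetition: repeat each pixel until the effective clock is legal.
MIN_TMDS_CLOCK_MHZ = 25.0

def repetition_factor(pixel_clock_mhz):
    factor = 1
    while pixel_clock_mhz * factor < MIN_TMDS_CLOCK_MHZ:
        factor += 1
    return factor

print(repetition_factor(13.5))  # 480i/NTSC: 2, i.e. transmitted at 27 MHz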

Consumer Electronics Control channel
The Consumer Electronics Control (CEC) channel is optional to implement, but wiring is mandatory. The channel:

-Uses the industry standard AV Link protocol.
-Used for remote control functions.
-One-wire bidirectional serial bus.
-Defined in HDMI Specification 1.0, updated in HDMI 1.2a, and again in 1.3a (added timer and audio commands).
This feature is used in two ways:

-To allow the user to command and control multiple CEC-enabled boxes with one remote control, and
-To allow individual CEC-enabled boxes to command and control each other, without user intervention.
An example of the latter is to allow the DVD player, when the drawer closes with a disc, to command the TV and the intervening A/V receiver (all CEC-enabled) to power up, select the appropriate HDMI ports, and auto-negotiate the proper video and audio modes. No remote control command is needed. Similarly, this type of equipment can be programmed to return to sleep mode when the movie ends, perhaps by checking the real-time clock: if it is later than 11:00 p.m. and the user does not specifically command the systems with the remote control, they all turn off at the command from the DVD player.
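
To make that example concrete, here is a hedged sketch of the two CEC frames such a DVD player might send. A CEC frame is a header byte (initiator and destination logical addresses) followed by an opcode and operands; the 0x04 (Image View On) and 0x82 (Active Source) opcodes and the logical addresses below follow the published CEC tables, but this is an illustration, not a driver.

# Hypothetical "one touch play": wake the TV, then announce ourselves as source.
TV, PLAYBACK_1, BROADCAST = 0x0, 0x4, 0xF  # CEC logical addresses

def cec_frame(initiator, destination, opcode, *operands):
    header = (initiator << 4) | destination
    return bytes([header, opcode, *operands])

image_view_on = cec_frame(PLAYBACK_1, TV, 0x04)          # wake the display
active_source = cec_frame(PLAYBACK_1, BROADCAST, 0x82,   # claim the input,
                          0x10, 0x00)                    # physical addr 1.0.0.0
print(image_view_on.hex(), active_source.hex())  # 4004 / 4f821000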

Alternative names for CEC are Anynet (Samsung), BRAVIA Theatre Sync (Sony), Regza Link (Toshiba), RIHD (Onkyo) and Viera Link/EZ-Sync (Panasonic/JVC).

Content protection
-According to High-bandwidth Digital Content Protection (HDCP) Specification 1.2.
-Beginning with HDMI CTS 1.3a, any system that implements HDCP must do so in a fully compliant manner. HDCP compliance is itself part of the requirements for HDMI compliance.
-The HDMI repeater bit, technically the HDCP repeater bit, controls the authentication and switching/distribution of an HDMI signal.

Versions
--------------------------------------------

Devices are manufactured to adhere to various versions of the specification, where each version is given a revision number. Each subsequent version uses the same cables but increases the throughput and capabilities of what can be transmitted over them. Whether an existing cable needs replacing depends on the cable itself (cables carry their own HDMI rating); the main consideration is whether it can handle the increased bandwidth, for example the 10.2 Gbit/s that comes with version 1.3. Cable compliance testing is included in the HDMI Compliance Test Specification (see TESTID 5-3), with "Category 1" and "Category 2" defined in the HDMI Specification 1.3a (Section 4.2.6).

A product listed as having an HDMI version does not necessarily have all of the features listed under that version classification; some of the features are optional. For example, in HDMI 1.3 support for the xvYCC wide color standard is optional. This means that if you have bought a camcorder that supports the wide color space (branded by Sony, for example, as "x.v.Color"), you have to check specifically that the display supports both HDMI 1.3 and the xvYCC wide color standard.

HDMI 1.0
Released December 2002.

-Single-cable digital audio/video connection with a maximum bitrate of 4.9 Gbit/s. Supports up to 165 Mpixel/s video (1080p at 60 Hz, or UXGA) and 8-channel/192 kHz/24-bit audio.

HDMI 1.1
Released May 2004.

-Added support for DVD Audio.

HDMI 1.2
Released August 2005.

-Added support for One Bit Audio, used on Super Audio CDs, up to 8 channels.
-Availability of HDMI Type A connector for PC sources.
-Ability for PC sources to use native RGB color-space while retaining the option to support the YCbCr CE color space.
-Requirement for HDMI 1.2 and later displays to support low-voltage sources.

HDMI 1.2a
Released December 2005.

-Fully specifies Consumer Electronic Control (CEC) features, command sets, and CEC compliance tests.

HDMI 1.3
Released 22 June 2006.

-Increases single-link bandwidth to 340 MHz (10.2 Gbit/s)
-Optionally supports "Deep Color": 30-bit, 36-bit, and 48-bit xvYCC, sRGB, or YCbCr (over one billion colors), up from 24-bit sRGB or YCbCr in previous versions.
-Incorporates automatic audio syncing (Audio video sync) capability.
-Optionally supports output of Dolby TrueHD and DTS-HD Master Audio streams for external decoding by AV receivers. TrueHD and DTS-HD are lossless audio codec formats used on HD DVDs and Blu-ray Discs. If the disc player can decode these streams into uncompressed audio, then HDMI 1.3 is not necessary, as all versions of HDMI can transport uncompressed audio.
-Availability of a new mini connector for devices such as camcorders.

HDMI 1.3a
Released 10 November 2006.

-Cable and Sink modifications for Type C.
-Source termination recommendation.
-Removed undershoot and maximum rise/fall time limits.
-CEC capacitance limits changed.
-RGB video quantization range clarification.
-CEC commands for timer control brought back in an altered form; audio control commands added.
-Concurrently released compliance test specification included.

HDMI 1.3b
Testing specification released 26 March 2007.

Cable length
--------------------------------------------
The HDMI specification does not define a maximum cable length; instead, it specifies a minimum performance standard, and any cable meeting that standard is compliant. As with all cables, signal attenuation becomes too high beyond a certain length, and different construction quality and materials enable cables of different lengths. In addition, higher performance requirements must be met to support video formats with higher resolutions and/or frame rates than the standard HDTV formats.

The signal attenuation and intersymbol interference caused by the cable can be compensated for by using adaptive equalization.

HDMI 1.3 defined two categories of cables: Category 1 (standard or HDTV) and Category 2 (high-speed or greater than HDTV) to reduce the confusion about which cables support which video formats. Using 28 AWG, a cable of about 5 metres (~16 ft) can be manufactured easily and inexpensively to Category 1 specifications. Higher-quality construction (24 AWG, tighter construction tolerances, etc.) can reach lengths of 12 to 15 metres (~39 to 49 ft). In addition, active cables (fiber optic or dual Cat-5 cables instead of standard copper) can be used to extend HDMI to 100 metres or more. Some companies also offer amplifiers, equalizers and repeaters that can string several standard (non-active) HDMI cables together.
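
A toy helper for choosing between the two cable categories follows. The 74.25 MHz and 340 MHz test points are the commonly cited certification limits for Category 1 and Category 2 respectively; treat them as assumptions rather than quoted spec text.

# Assumed certification test points: Cat 1 at 74.25 MHz, Cat 2 at 340 MHz.
def required_category(pixel_clock_mhz):
    if pixel_clock_mhz <= 74.25:
        return "Category 1 (standard)"
    if pixel_clock_mhz <= 340.0:
        return "Category 2 (high-speed)"
    return "beyond single-link HDMI 1.3"

print(required_category(74.25))   # 720p/1080i -> Category 1
print(required_category(148.5))   # 1080p at 60 Hz -> Category 2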

HDMI and high-definition optical media players
--------------------------------------------
Both introduced in 2006, Blu-ray Disc and HD DVD offer new high-fidelity audio features that require HDMI for best results. Dolby Digital Plus (DD+), Dolby TrueHD and DTS-HD Master Audio use bitrates exceeding TOSLINK's capacity. HDMI 1.3 can transport DD+, TrueHD, and DTS-HD bitstreams in compressed form. This capability would allow a preprocessor or audio/video receiver with the necessary decoder to decode the data itself, but has limited usefulness for HD DVD and Blu-ray.

HD DVD and Blu-ray permit "interactive audio", where the disc-content tells the player to mix multiple audio sources together, before final output. Consequently, most players will handle audio-decoding internally, and simply output LPCM audio all the time. Multichannel LPCM can be transported over an HDMI 1.1 (or higher) connection. As long as the audio/video receiver (or preprocessor) supports multi-channel LPCM audio over HDMI, and supports HDCP, the audio reproduction is equal in resolution to HDMI 1.3. However, many of the cheapest AV receivers do not support audio over HDMI and are often labeled as "HDMI passthrough" devices.

Note that all of the features of an HDMI version may not be implemented in products adhering to that version since certain features of HDMI, such as Deep Color and xvYCC support, are optional.

Criticism
--------------------------------------------
Among manufacturers, the HDMI specification has been criticized as lacking in functional usefulness. The public specification devotes many pages to the lower-level protocol layers (physical, electrical, logical), but documentation of the system framework is inadequate. HDMI peripherals include audio/video sources, audio-only receivers, audio/video receivers, video-only receivers, repeaters (which have more downstream ports than upstream ports), and switchers (which have more upstream ports than downstream ports). The specification stops short of offering examples of system behavior involving multiple HDMI devices, leaving implementation to the product engineer's interpretation. Even between devices which use chips from Silicon Image (a promoter and supplier of HDMI IP and silicon), interoperability is not assured. The industry is working to improve interoperability through plugfest events (manufacturer interoperability conferences) and more comprehensive design-validation services.

Another criticism of HDMI is that the connectors are not as robust as previous display connectors. Currently most devices with HDMI capability use surface-mount connectors rather than through-hole or reinforced connectors, making the ports more susceptible to damage from external forces: tripping over a cable plugged into an HDMI port can easily damage the port.

Closed captioning problems
According to the HDMI Specification, all video timings carried across the link for standard video modes (such as 720p, 1080i, etc.) must have horizontal and vertical timings matching those defined in the CEA-861D Specification. Since those definitions allow only for the visual portion of the frame (or field, for interlaced video modes), there is no line transmitted for closed captions. Line 21 is not part of the transmitted data as it is in analog modes; for HDMI it is just one of the non-data lines in the vertical blanking interval.

Although an HDMI display is allowed to define a 'native mode' for video, which could expand the active line count to encompass Line 21, most MPEG decoders cannot format a digital video stream to include extra lines—they send only vertical blanking. Even if it were possible, the closed captioning character codes would have to be encoded in some way into the pixel values in Line 21. This would then require the receiver logic in the display to decode those codes and construct the captions.

It is possible, although not standardized, that some measure of content in text form can be transmitted from Source to Sink using CEC commands, or using InfoFrame packets. Again, as there is no standardized format for such data it would likely work only between a source and sink system from the same manufacturer. Such uniqueness goes against the standardization mission of HDMI, which is focused in part on interoperability.

Of course, it is possible that a future enhancement of the HDMI Specification may encompass closed caption transport.

From Wikipedia, the free encyclopedia


Thursday, September 13, 2007

Multiprotocol Label Switching (MPLS)


In computer networking and telecommunications, Multiprotocol Label Switching (MPLS) is a data-carrying mechanism that belongs to the family of packet-switched networks. MPLS operates at an OSI Model layer that is generally considered to lie between traditional definitions of Layer 2 (data link layer) and Layer 3 (network layer), and thus is often referred to as a "Layer 2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients that provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames.


Background
---------------------------------------
A number of different technologies were previously deployed with essentially identical goals, such as frame relay and ATM. MPLS is now replacing these technologies in the marketplace, mostly because it is better aligned with current and future technology needs.

In particular, MPLS dispenses with the cell-switching and signaling-protocol baggage of ATM. MPLS recognizes that small ATM cells are not needed in the core of modern networks, since modern optical networks (as of 2001) are so fast (at 10 Gbit/s and well beyond) that even full-length 1500 byte packets do not incur significant real-time queuing delays (the need to reduce such delays, to support voice traffic, having been the motivation for the cell nature of ATM).

At the same time, it attempts to preserve the traffic engineering and out-of-band control that made frame relay and ATM attractive for deploying large-scale networks.

MPLS was originally proposed by a group of engineers from Ipsilon Networks, but their "IP Switching" technology, which was defined only to work over ATM, did not achieve market dominance. Cisco Systems, Inc. introduced a related proposal, not restricted to ATM transmission, called "Tag Switching" while it was a Cisco proprietary proposal; it was renamed "Label Switching" when it was handed over to the IETF for open standardization. The IETF work involved proposals from other vendors, and development of a consensus protocol that combined features from several vendors' work.

One original motivation was to allow the creation of simple high-speed switches, since for a significant length of time it was impossible to forward IP packets entirely in hardware. However, advances in VLSI have made such devices possible. Therefore the advantages of MPLS primarily revolve around the ability to support multiple service models and perform traffic management. MPLS also offers a robust recovery framework that goes beyond the simple protection rings of synchronous optical networking (SONET/SDH).

While the traffic management benefits of migrating to MPLS are quite valuable (better reliability, increased performance), there is a significant loss of visibility and access into the MPLS cloud for IT departments.

How MPLS works
---------------------------------------
MPLS works by prepending packets with an MPLS header containing one or more 'labels'. This is called a label stack.

Each label stack entry is 32 bits and contains four fields (see the packing sketch after this list):

-a 20-bit label value.
-a 3-bit field for QoS priority (experimental).
-a 1-bit bottom of stack flag. If this is set, it signifies the current label is the last in the stack.
-an 8-bit TTL (time to live) field.
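
Since the four fields sum to exactly 32 bits, a label stack entry packs into one network-order word. A minimal Python sketch of that layout (field widths from the list above; the example values are arbitrary):

import struct

# label(20) | QoS/EXP(3) | bottom-of-stack(1) | TTL(8) = one 32-bit entry
def pack_entry(label, exp, bottom, ttl):
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def unpack_entry(data):
    (word,) = struct.unpack("!I", data)
    return {"label": word >> 12, "exp": (word >> 9) & 0x7,
            "bottom": bool((word >> 8) & 0x1), "ttl": word & 0xFF}

print(unpack_entry(pack_entry(label=100, exp=0, bottom=True, ttl=64)))
# {'label': 100, 'exp': 0, 'bottom': True, 'ttl': 64}
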
These MPLS-labeled packets are switched after a label lookup/switch instead of a lookup in the IP table. As mentioned above, when MPLS was conceived, label lookup and label switching were faster than a RIB lookup because they could take place directly within the switching fabric rather than the CPU.

The entry and exit points of an MPLS network are called Label Edge Routers (LER), which, respectively, push an MPLS label onto an incoming packet and pop it off an outgoing packet. Routers that perform routing based only on the label are called Label Switch Routers (LSR). In some applications, the packet presented to the LER may already have a label, so that the router pushes a second label onto the packet. For more information see Penultimate Hop Popping.

In the specific context of an MPLS-based Virtual Private Network (VPN), LSRs that function as ingress and/or egress routers to the VPN are often called PE (Provider Edge) routers. Devices that function only as transit routers are similarly called P (Provider) routers. See RFC 2547. The job of a P router is significantly easier than that of a PE router, so P routers can be less complex and may be more dependable because of this.

When an unlabeled packet enters the ingress router and needs to be passed on to an MPLS tunnel, the router first determines the forwarding equivalence class the packet should be in, and then inserts one or more labels in the packet's newly created MPLS header. The packet is then passed on to the next hop router for this tunnel.

When a labeled packet is received by an MPLS router, the topmost label is examined. Based on the contents of the label a swap, push (impose) or pop (dispose) operation can be performed on the packet's label stack. Routers can have prebuilt lookup tables that tell them which kind of operation to do based on the topmost label of the incoming packet so they can process the packet very quickly. In a swap operation the label is swapped with a new label, and the packet is forwarded along the path associated with the new label.

In a push operation a new label is pushed on top of the existing label, effectively "encapsulating" the packet in another layer of MPLS. This allows the hierarchical routing of MPLS packets. Notably, this is used by MPLS VPNs.

In a pop operation the label is removed from the packet, which may reveal an inner label below. This process is called "decapsulation". If the popped label was the last on the label stack, the packet "leaves" the MPLS tunnel. This is usually done by the egress router, but see PHP below.

During these operations, the contents of the packet below the MPLS label stack are not examined; indeed, transit routers typically need only to examine the topmost label on the stack. Forwarding is done based on the contents of the labels, which allows "protocol-independent packet forwarding" that does not need to look at a protocol-dependent routing table and avoids the expensive IP longest-prefix match at each hop.
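
The swap/push/pop logic above fits in a few lines. A minimal sketch with a made-up label table (the labels, operations, and interface names are invented for illustration):

# Hypothetical per-LSR label table: in-label -> (operation, new label, out interface)
TABLE = {
    100: ("swap", 200, "if0"),
    300: ("push", 400, "if1"),
    500: ("pop",  None, "if2"),
}

def forward(stack):
    """Apply one LSR hop to a label stack (last element = topmost label)."""
    op, new_label, out_if = TABLE[stack[-1]]  # only the top label is examined
    if op == "swap":
        stack[-1] = new_label
    elif op == "push":
        stack.append(new_label)
    elif op == "pop":
        stack.pop()                           # may reveal an inner label
    return stack, out_if

print(forward([100]))      # ([200], 'if0')
print(forward([42, 500]))  # ([42], 'if2'): inner label 42 revealed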

At the egress router, when the last label has been popped, only the payload remains. This can be an IP packet, or any of a number of other kinds of payload packet. The egress router must therefore have routing information for the packet's payload, since it must forward it without the help of label lookup tables. An MPLS transit router has no such requirement.

In some special cases, the last label can also be popped off at the penultimate hop (the hop before the egress router). This is called Penultimate Hop Popping (PHP). This may be interesting in cases where the egress router has lots of packets leaving MPLS tunnels, and thus spends inordinate amounts of CPU time on this. By using PHP, transit routers connected directly to this egress router effectively offload it, by popping the last label themselves.

MPLS can make use of existing ATM network infrastructure, as its labeled flows can be mapped to ATM virtual circuit identifiers, and vice-versa.

Installing and removing MPLS paths
---------------------------------------
There are two standardized protocols for managing MPLS paths: CR-LDP (Constraint-based Routing Label Distribution Protocol) and RSVP-TE, an extension of the RSVP protocol for traffic engineering. An extension of the BGP protocol can also be used to manage MPLS paths.

An MPLS header does not identify the type of data carried inside the MPLS path. If one wants to carry two different types of traffic between the same two routers, with different treatment from the core routers for each type, one has to establish a separate MPLS path for each type of traffic.

Comparison of MPLS versus IP
---------------------------------------
MPLS cannot be compared to IP as a separate entity because it works in conjunction with IP and IP's IGP routing protocols. MPLS gives IP networks simple traffic engineering, the ability to transport Layer 3 (IP) VPNs with overlapping address spaces, and support for Layer 2 pseudowires (with Any Transport Over MPLS, or AToM; see the Martini draft). Routers with programmable CPUs and without TCAM/CAM or another method for fast lookups may also see a limited increase in performance.

MPLS relies on IGP routing protocols to construct its label forwarding table, and the scope of any IGP is usually restricted to a single carrier for stability and policy reasons. As there is still no standard for carrier-to-carrier MPLS, it is not possible to have the same MPLS service (Layer 2 or Layer 3 VPN) covering more than one operator.

MPLS local protection
In the event of a network element failure, when recovery mechanisms are employed at the IP layer restoration may take several seconds, which is unacceptable for real-time applications (such as VoIP). In contrast, MPLS local protection meets the requirements of real-time applications, with recovery times comparable to those of SONET rings (up to 50 ms).

Comparison of MPLS versus ATM
---------------------------------------
While the underlying protocols and technologies are different, both MPLS and ATM provide a connection-oriented service for transporting data across computer networks. In both technologies connections are signaled between endpoints, connection state is maintained at each node in the path and encapsulation techniques are used to carry data across the connection. Excluding differences in the signaling protocols (RSVP/LDP for MPLS and PNNI for ATM) there still remain significant differences in the behavior of the technologies.

The most significant difference is in the transport and encapsulation methods. MPLS is able to work with variable-length packets, while ATM transports fixed-length (53-byte) cells. Packets must be segmented, transported and re-assembled over an ATM network using an adaptation layer, which adds significant complexity and overhead to the data stream. MPLS, on the other hand, simply adds a label to the head of each packet and transmits it on the network.

Differences exist, as well, in the nature of the connections. An MPLS connection (LSP) is unidirectional, allowing data to flow in only one direction between two endpoints; establishing two-way communications between endpoints requires a pair of LSPs. Because two LSPs are required for connectivity, data flowing in the forward direction may use a different path from data flowing in the reverse direction. ATM point-to-point connections (virtual circuits), on the other hand, are bidirectional, allowing data to flow in both directions over the same path (strictly, only SVC ATM connections are bidirectional; PVC ATM connections are unidirectional).

Both ATM and MPLS support tunnelling of connections inside connections. MPLS uses label stacking to accomplish this while ATM uses Virtual Paths. MPLS can stack multiple labels to form tunnels within tunnels. The ATM Virtual Path Indicator (VPI) and Virtual Circuit Indicator (VCI) are both carried together in the cell header, limiting ATM to a single level of tunnelling.


The biggest single advantage that MPLS has over ATM is that it was designed from the start to be complementary to IP. Modern routers are able to support both MPLS and IP natively across a common interface allowing network operators great flexibility in network design and operation. ATM's incompatibilities with IP require complex adaptation making it largely unsuitable in today's predominantly IP networks.

MPLS deployment
---------------------------------------
MPLS is currently in use in large "IP Only" networks, and is standardized by IETF in RFC 3031.

In practice, MPLS is mainly used to forward IP datagrams and Ethernet traffic. Major applications of MPLS are telecommunications traffic engineering and MPLS VPNs.

Competitors to MPLS
---------------------------------------
MPLS can exist in both an IPv4 environment (IPv4 routing protocols) and an IPv6 environment (IPv6 routing protocols). The major goal of MPLS development, the increase of routing speed, is no longer relevant because of the use of ASIC-, TCAM- and CAM-based switching. Therefore, the major use of MPLS is to implement limited traffic engineering and Layer 3/Layer 2 “service provider type” VPNs over existing IPv4 networks. The only competitors to MPLS are technologies like L2TPv3 that also provide services such as service provider Layer 2 and Layer 3 VPNs.

IEEE 1355 is a completely unrelated technology that does something similar in hardware.

IPv6 references: Grosetete, Patrick, "IPv6 over MPLS," Cisco Systems, 2001; Juniper Networks, "IPv6 and Infranets" white paper; Juniper Networks, "DoD's Research and Engineering Community" white paper.

From Wikipedia, the free encyclopedia


Thaicom 4 (IPSTAR)


Thaicom 4, also known as IPSTAR, is a broadband satellite built by Space Systems/Loral (SS/L) for Shin Satellite and was the heaviest commercial satellite launched as of August 2005. It was launched on August 11, 2005 from the European Space Agency's spaceport in French Guiana on board an Ariane rocket. The satellite had a launch mass of 6,486 kilograms. Thaicom 4 is from SS/L's LS-1300 line of spacecraft.

The IPSTAR broadband satellite was designed for high-speed, 2-way broadband communication over an IP platform and is to play an important role in the broadband Internet/multimedia revolution and the convergence of information and communication technologies.


The satellite's 45 Gbit/s bandwidth capacity, in combination with its platform’s ability to provide an immediately available, high-capacity ground network with affordable bandwidth, allows for rapid deployment and flexible service locations within its footprint.

The IPSTAR system comprises a gateway earth station communicating over the IPSTAR satellite to provide broadband packet-switched communications to a large number of small terminals in a star network configuration.

A wide-band data link from the gateway to the user terminal employs Orthogonal Frequency Division Multiplexing (OFDM) with a Time Division Multiplex (TDM) overlay. These forward channels employ highly efficient transmission methods, including Turbo Product Codes (TPC) and higher-order modulation (L-codes) for increased system performance.

In the terminal-to-gateway direction (the return link), the narrow-band channels employ the same efficient transmission methods. These narrow-band channels operate in different multiple-access modes based on bandwidth-usage behavior, including Slotted ALOHA, ALOHA, and TDMA for the star return-link waveform.


Spot Beam Coverage

Traditional satellite technology utilizes a broad single beam to cover entire continents and regions. With the introduction of multiple narrowly focused spot beams and frequency reuse, IPSTAR is capable of maximizing the available frequency for transmissions. Increasing bandwidth by a factor of twenty compared to traditional Ku-band satellites translates into better efficiencies: despite the higher costs associated with spot-beam technology, the overall cost per circuit is considerably lower than with shaped-beam technology.


Dynamic Power Allocation

IPSTAR's Dynamic Power Allocation optimizes the use of power among beams, maintaining a power reserve of 20 percent that can be allocated to beams affected by rain fade, thus maintaining the link.

From Wikipedia, the free encyclopedia


Sunday, September 9, 2007

Storage area network (SAN)


In computing, a storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries and optical jukeboxes) to servers in such a way that, to the operating system, the devices appear as locally attached. Although cost and complexity are dropping, as of 2007 SANs are still uncommon outside larger enterprises.

By contrast to a SAN, network-attached storage (NAS) uses file-based protocols such as NFS or SMB/CIFS where it is clear that the storage is remote, and computers request a portion of an abstract file rather than a disk block.
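
The distinction is easiest to see from the client's point of view: a SAN LUN is addressed like a local disk, by block offset, while NAS is addressed by file path. A small illustrative sketch (the device and mount paths are hypothetical):

BLOCK_SIZE = 512

def read_san_block(device="/dev/sdb", lba=2048):
    """SAN view: the LUN looks locally attached; read raw blocks by offset."""
    with open(device, "rb") as disk:
        disk.seek(lba * BLOCK_SIZE)
        return disk.read(BLOCK_SIZE)

def read_nas_file(path="/mnt/nfs/report.txt"):
    """NAS view: request a portion of an abstract file on a remote server."""
    with open(path, "rb") as f:
        return f.read(4096)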


Network types
----------------------------------------------
Most storage networks use the SCSI protocol for communication between servers and disk drive devices. However, they do not use the SCSI low-level physical interface (e.g. its cabling), as its bus topology is unsuitable for networking. To form a network, a mapping layer onto other low-level protocols is used:

-Fibre Channel Protocol (FCP), mapping SCSI over Fibre Channel. Currently the most common. Comes in 1 Gbit/s, 2 Gbit/s, 4 Gbit/s, 8 Gbit/s, 10 Gbit/s variants.
-iSCSI, mapping SCSI over TCP/IP.
-HyperSCSI, mapping SCSI over Ethernet.
-FICON mapping over Fibre Channel (used by mainframe computers).
-ATA over Ethernet, mapping ATA over Ethernet.
-SCSI and/or TCP/IP mapping over InfiniBand (IB).

Storage sharing
----------------------------------------------
The driving force for the SAN market is the rapid growth of highly transactional data that require high-speed, block-level access to the hard drives (such as data from email servers, databases, and high-usage file servers). Historically, enterprises first created "islands" of high-performance SCSI disk arrays. Each island was dedicated to a different application and visible as a number of "virtual hard drives" (or LUNs).

SAN essentially enables connecting those storage islands using a high-speed network.

However, an operating system still sees a SAN as a collection of LUNs and is supposed to maintain its own file systems on them. The most reliable and most widely used file systems are still local ones, which cannot be shared among multiple hosts: if two independent local file systems resided on a shared LUN, they would be unaware of the fact, would have no means of cache synchronization, and would eventually corrupt each other. Thus, sharing data between computers through a SAN requires advanced solutions, such as SAN file systems or clustered computing.

Despite such issues, SANs help to increase storage capacity utilization, since multiple servers share the same growth reserve on disk arrays.

In contrast, NAS allows many computers to access the same file system over the network and synchronizes their accesses. Lately, the introduction of NAS heads has allowed easy conversion of SAN storage to NAS.

Benefits
----------------------------------------------
Sharing storage usually simplifies storage administration and adds flexibility since cables and storage devices do not have to be physically moved to move storage from one server to another.

Other benefits include the ability to allow servers to boot from the SAN itself. This allows for a quick and easy replacement of faulty servers since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server. This process can take as little as half an hour and is a relatively new idea being pioneered in newer data centers. There are a number of emerging products designed to facilitate and speed up this process still further. For example, Brocade offers an Application Resource Manager product which automatically provisions servers to boot off a SAN, with typical-case load times measured in minutes. While this area of technology is still new, many view it as being the future of the enterprise datacenter.

SANs also tend to enable more effective disaster recovery processes. A SAN can span a distant location containing a secondary storage array, enabling storage replication implemented by disk array controllers, by server software, or by specialized SAN devices. Since IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The traditional physical SCSI layer could support only a few meters of distance, not nearly enough to ensure business continuance in a disaster. Demand for this SAN application has increased dramatically since the September 11th attacks in the United States and with the increased regulatory requirements associated with Sarbanes-Oxley and similar legislation.

Consolidation of disk arrays economically accelerated the advancement of some of their advanced features, including I/O caching, snapshotting, and volume cloning (Business Continuance Volumes, or BCVs).

SAN infrastructure
----------------------------------------------
SANs often utilize a Fibre Channel fabric topology - an infrastructure specially designed to handle storage communications. It provides faster and more reliable access than higher-level protocols used in NAS. A fabric is similar in concept to a network segment in a local area network. A typical Fibre Channel SAN fabric is made up of a number of Fibre Channel switches.

Today, all major SAN equipment vendors also offer some form of Fibre Channel routing solution, and these bring substantial scalability benefits to the SAN architecture by allowing data to cross between different fabrics without merging them. These offerings use proprietary protocol elements, and the top-level architectures being promoted are radically different. They often enable mapping Fibre Channel traffic over IP or over SONET/SDH.

Compatibility
----------------------------------------------
One of the early problems with Fibre Channel SANs was that the switches and other hardware from different manufacturers were not entirely compatible. Although the basic storage protocol, FCP, was always quite standard, some of the higher-level functions did not interoperate well. Similarly, many host operating systems would react badly to other operating systems sharing the same fabric. Many solutions were pushed to the market before standards were finalized, and vendors innovated around the standards.

The combined efforts of the members of the Storage Networking Industry Association (SNIA) improved the situation during 2002 and 2003. Today most vendor devices, from HBAs to switches and arrays, interoperate nicely, though there are still many high-level functions that do not work between different manufacturers’ hardware.

SANs at home
----------------------------------------------
SANs are primarily used in large scale, high performance enterprise storage operations. It would be unusual to find a single disk drive connected directly to a SAN. Instead, SANs are normally networks of large disk arrays. SAN equipment is relatively expensive, therefore, Fibre Channel host bus adapters are rare in desktop computers. The iSCSI SAN technology is expected to eventually produce cheap SANs, but it is unlikely that this technology will be used outside the enterprise data center environment. Desktop clients are expected to continue using NAS protocols such as CIFS and NFS. The exception to this may be remote storage replication.

SANs in the Media and Entertainment
----------------------------------------------
Video editing workgroups require very high data rates. Outside of the enterprise market, this is one area that greatly benefits from SANs.

Per-node bandwidth usage control, sometimes referred to as quality-of-service (QoS), is especially important in video workgroups as it lets you ensure a fair and prioritized bandwidth usage across your network. Avid Unity and Tiger Technology MetaSAN are specifically designed for video networks and offer this functionality.

Storage virtualization and SANs
----------------------------------------------
Storage virtualization refers to the process of completely abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents the user with a logical space for data storage and transparently handles the process of mapping it to the actual physical location. This is naturally implemented inside each modern disk array, using the vendor's proprietary solution. However, the goal is to virtualize multiple disk arrays, made by different vendors and scattered over the network, into a single monolithic storage device that can be managed uniformly.



Surface Computing (Microsoft Surface)


Microsoft Surface is a forthcoming product from Microsoft, developed as a combined software and hardware technology that allows a user, or multiple users, to manipulate digital content using natural motions, hand gestures, or physical objects. It was announced on May 29, 2007 at D5, and is expected to be released by commercial partners in November 2007. Initial customers will be in the hospitality business, such as restaurants, hotels, retail, and public entertainment venues.


Overview
-------------------------------------------------
Surface is essentially a Windows Vista PC tucked inside a black table base, topped with a 30-inch touchscreen in a clear acrylic frame. Five cameras that can sense nearby objects are mounted beneath the screen. Users can interact with the machine by touching or dragging their fingertips and objects such as paintbrushes across the screen, or by setting real-world items tagged with special barcode labels on top of it.

Surface has been optimized to respond to 52 touches at a time. During a demonstration with a reporter, Mark Bolger, the Surface Computing group's marketing director, "dipped" his finger in an on-screen paint palette, then dragged it across the screen to draw a smiley face. Then he used all 10 fingers at once to give the face a full head of hair.

In addition to recognizing finger movements, Microsoft Surface can also identify physical objects. Microsoft says that when a diner sets down a wine glass, for example, the table can automatically offer additional wine choices tailored to the dinner being eaten.

Prices will reportedly be $5,000 to $10,000 per unit. However, Microsoft said it expects prices to drop enough to make consumer versions feasible in three to five years.

The machines, which Microsoft debuted May 30, 2007 at a technology conference in Carlsbad, California, are set to arrive in November in T-Mobile USA stores and properties owned by Starwood Hotels & Resorts Worldwide Inc. and Harrah's Entertainment Inc.

History
-------------------------------------------------
The technology behind Surface is called Multi-touch. It has at least a 25-year history, beginning in 1982, with pioneering work being done at the University of Toronto (multi-touch tablets) and Bell Labs (multi-touch screens). The product idea for Surface was initially conceptualized in 2001 by Steven Bathiche of Microsoft Hardware and Andy Wilson of Microsoft Research. In October 2001, a virtual team was formed with Bathiche and Wilson as key members, to bring the idea to the next stage of development.

In 2003, the team presented the idea to the Microsoft Chairman Bill Gates, in a group review. Later, the virtual team was expanded and a prototype nicknamed T1 was produced within a month. The prototype was based on an IKEA table with a hole cut in the top and a sheet of architect vellum used as a diffuser. The team also developed some applications, including pinball, a photo browser and a video puzzle. Over the next year, Microsoft built more than 85 early prototypes for Surface. The final hardware design was completed in 2005.

A similar concept was used by Sean Bean's character "Merrick" in the 2005 science-fiction movie The Island. As noted in the DVD commentary, the director, Michael Bay, stated that the concept of the device came from consultation with Microsoft during the making of the movie. One of the film's technology consultants' associates from MIT later joined Microsoft to work on the Surface project.

Surface was unveiled by Microsoft CEO Steve Ballmer on May 29, 2007 at The Wall Street Journal's D: All Things Digital conference in Carlsbad, California. Surface Computing is part of Microsoft's Productivity and Extended Consumer Experiences Group, which is within the Entertainment & Devices division. The first few companies to deploy Surface will include Harrah's Entertainment, Starwood Hotels & Resorts Worldwide, T-Mobile and a distributor, International Game Technology.

Features
-------------------------------------------------
Microsoft notes four main components as being important in Surface's interface: direct interaction, multi-touch contact, a multi-user experience, and object recognition. The device also enables drag-and-drop of digital media when Wi-Fi enabled devices, such as a Microsoft Zune, cellular phone, or digital camera, are placed on its surface.

Surface features multi-touch technology that allows a user to interact with the device at more than one point of contact, for example using all of their fingers to make a drawing instead of just one. As an extension of this, multiple users can interact with the device at once.

The technology allows non-digital objects to be used as input devices. In one example, a normal paint brush was used to create a digital painting in the software. This is made possible by the fact that, in using cameras for input, the system does not rely on restrictive properties required of conventional touchscreen or touchpad devices such as the capacitance, electrical resistance, or temperature of the tool used (see Touchscreen).

The computer's "vision" is created by a near-infrared, 850-nanometer-wavelength LED light source aimed at the surface. When an object touches the tabletop, the light is reflected to multiple infrared cameras with a net resolution of 1280 x 960, allowing the machine to sense and react to items touching the tabletop.

Surface will ship with basic applications, including photos, music, virtual concierge, and games, that can be customized for the customers.

Specifications
-------------------------------------------------
Surface is a 30-inch (76 cm) display in a table-like form factor, 22 inches (56 cm) high, 21 inches (53 cm) deep, and 42 inches (107 cm) wide. The Surface tabletop is acrylic, and its interior frame is powder-coated steel. The software platform runs on Windows Vista and has wired Ethernet 10/100, wireless 802.11 b/g, and Bluetooth 2.0 connectivity.




Electronic Product Code

The Electronic Product Code (EPC) is a family of coding schemes created as an eventual successor to the bar code. The EPC was created as a low-cost method of tracking goods using RFID technology. It is designed to meet the needs of various industries while guaranteeing uniqueness for all EPC-compliant tags. EPC tags were designed to identify each individual item manufactured, as opposed to just the manufacturer and class of products, as bar codes do today. The EPC accommodates existing coding schemes and defines new schemes where necessary.

The EPC was the creation of the MIT Auto-ID Center, a consortium of over 120 global corporations and university labs. The EPC system is currently managed by EPCglobal, Inc., a subsidiary of GS1, creators of the UPC barcode.

The Electronic Product Code promises to become the standard for global RFID usage, and a core element of the proposed EPCglobal Network.


Structure
--------------------------------
All EPC numbers contain a header identifying the encoding scheme that has been used. This in turn dictates the length, type and structure of the EPC. EPC encoding schemes frequently contain a serial number which can be used to uniquely identify one object. (A sketch of header-based scheme detection follows the list below.)

EPC Version 1.3 supports the following coding schemes:

-General Identifier (GID): GID-96
-Serialized GS1 Global Trade Item Number (SGTIN): SGTIN-96, SGTIN-198
-GS1 Serial Shipping Container Code (SSCC): SSCC-96
-GS1 Global Location Number (GLN): SGLN-96, SGLN-195
-GS1 Global Returnable Asset Identifier (GRAI): GRAI-96, GRAI-170
-GS1 Global Individual Asset Identifier (GIAI): GIAI-96, GIAI-202
-DoD Construct: DoD-96
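
As a rough illustration of how the header dictates the scheme, here is a toy lookup for 96-bit binary encodings. The 8-bit header values are the ones commonly tabulated in the EPC Tag Data Standard (e.g. 0x30 for SGTIN-96); treat them as assumptions to be verified against the current specification.

# Assumed 8-bit header values for 96-bit EPC encodings.
HEADERS = {0x2F: "DoD-96", 0x30: "SGTIN-96", 0x31: "SSCC-96", 0x32: "SGLN-96",
           0x33: "GRAI-96", 0x34: "GIAI-96", 0x35: "GID-96"}

def scheme_of(epc_bits, length=96):
    header = epc_bits >> (length - 8)  # the header is the most significant 8 bits
    return HEADERS.get(header, "unknown")

print(scheme_of(0x30 << 88))  # 'SGTIN-96'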

From Wikipedia, the free encyclopedia


Thursday, September 6, 2007

Near Field Communication


Near Field Communication, or NFC, is a short-range wireless technology that enables communication between devices over a short distance (about a hand's width). The technology is primarily aimed at usage in mobile phones.

NFC is compatible with the existing contactless infrastructure already in use for public transportation and payment.

Essential specifications
--------------------------------------------
-Works by magnetic field induction. It operates within the globally available and unlicensed RF band of 13.56 MHz.
-Working distance: 0-20 centimeters
-Speed: 106 kbit/s, 212 kbit/s or 424 kbit/s
-There are two modes:
-Passive Communication Mode: The Initiator device provides a carrier field and the Target device answers by modulating the existing field. In this mode, the Target device may draw its operating power from the Initiator-provided electromagnetic field, thus making the Target device a transponder.
-Active Communication Mode: Both Initiator and Target device communicate by generating their own field. In this mode, both devices typically need to have a power supply.
-NFC can be used to configure and initiate other wireless network connections such as Bluetooth, Wi-Fi or Ultra-wideband.
A patent licensing program for NFC is currently under development by Via Licensing Corporation, an independent subsidiary of Dolby Laboratories.


Uses and applications
--------------------------------------------
NFC technology is currently mainly aimed at being used with mobile phones. There are three main use cases for NFC:

-card emulation: the NFC device behaves like an existing contactless card.
-reader mode: the NFC device is active and reads a passive RFID tag, for example for interactive advertising.
-P2P mode: two NFC devices communicate together and exchange information.
Plenty of applications will be possible such as:

-Mobile ticketing in public transport - an extension of the existing contactless infrastructure.
-Mobile Payment - the mobile phone acts as a debit/ credit payment card.
-Smart poster - the mobile phone is used to read RFID tags on outdoor billboards in order to get info on the move.
-Bluetooth pairing - in the future pairing of Bluetooth 2.1 devices with NFC support will be as easy as bringing them close together and accepting the pairing. The process of activating Bluetooth on both sides, searching, waiting, pairing and authorization will be replaced by a simple "touch" of the mobile phones.
Other applications in the future could include:

-Electronic tickets – airline tickets, concert/event tickets, and others
-Electronic money
-Travel cards
-Identity documents
-Mobile commerce
-Electronic keys – car keys, house/office keys, hotel room keys, etc

Standardization bodies & industry projects
--------------------------------------------
Standards
NFC was approved as an ISO/IEC standard on December 8, 2003 and as an ECMA standard later on.

NFC is an open platform technology standardized in ECMA-340 and ISO/IEC 18092. These standards specify the modulation schemes, coding, transfer speeds and frame format of the RF interface of NFC devices, as well as initialization schemes and the conditions required for data collision control during initialization, for both passive and active NFC modes. Furthermore, they also define the transport protocol, including protocol activation and data-exchange methods. The air interface for NFC is standardized in:

-ISO/IEC 18092/ECMA-340 : Near Field Communication Interface and Protocol-1(NFCIP-1)
-ISO/IEC 21481/ECMA-352 : Near Field Communication Interface and Protocol-2 (NFCIP-2)
The NFC Forum has in addition defined a common data format called NDEF, which can be used to store and transport different kinds of items, ranging from any MIME-typed object to ultra-short RTD documents, such as URLs.

NDEF is conceptually very similar to MIME. It is a dense binary format of so-called "records", in which each record can hold a different type of object. By convention, the type of the first record defines the context of the entire message.
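
To make the record structure concrete, here is a hedged sketch of a one-record NDEF message carrying a URL. The layout (flags byte, type length, payload length, type, payload) and the 0x01 URI-prefix byte abbreviating "http://www." follow the NFC Forum NDEF and URI RTD specifications, but the sketch is illustrative rather than a conformant implementation.

# One short-record NDEF message holding a well-known URI record.
def ndef_uri_record(uri_tail):
    flags = 0xD1                        # MB=1, ME=1, SR=1, TNF=0x1 (well-known)
    payload = bytes([0x01]) + uri_tail  # 0x01 = "http://www." URI prefix code
    return bytes([flags, 1, len(payload)]) + b"U" + payload

message = ndef_uri_record(b"example.com")
print(message.hex())  # d1010c5501... -> one record, type "U", 12-byte payload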

NFC Forum
The NFC Forum is a non-profit industry association founded on March 18, 2004 by NXP Semiconductors, Sony Corporation and Nokia Corporation to advance the use of NFC short-range wireless interaction in consumer electronics, mobile devices and PCs. The NFC Forum will promote implementation and standardization of NFC technology to ensure interoperability between devices and services. In July 2007, there were over 115 members of the NFC Forum.

GSMA
The GSM Association (GSMA) is the global trade association representing 700 mobile phone operators across 218 countries of the world.

They have launched two initiatives:

-the Mobile NFC initiative: fourteen mobile network operators, who together represent 40% of the global mobile market, back NFC and are working together to develop NFC applications. They are Bouygues Télécom, China Mobile, Cingular Wireless, KPN, Mobilkom Austria, Orange, SFR, SK Telecom, Telefonica Móviles España, Telenor, TeliaSonera, Telecom Italia Mobile (TIM), Vodafone and 3.
On 13 February 2007, they published a white paper on NFC to give mobile operators' point of view on the NFC ecosystem.

-the Pay buy mobile initiative seeks to define a common global approach to using Near Field Communications (NFC) technology to link mobile devices with payment and contactless systems. To date, 30 mobile operators have joined this initiative.

StoLPaN
StoLPaN (‘Store Logistics and Payment with NFC’) is a pan-European consortium supported by the European Commission’s Information Society Technologies program. StoLPaN will examine the as-yet untapped potential of bringing together this new kind of local wireless interface, NFC, and mobile communication.

Other standardization bodies
Other standardization bodies are involved in NFC:

-ETSI / SCP (Smart Card Platform) to specify the interface between the SIM card and the NFC chipset.
-Global Platform to specify a multi-application architecture of the secure element.
-EMVCo for the impacts on the EMV payment applications.
