The Second Generation (2G) networks were introduced at the end of the 1980s. The main difference of this generation from 1G was that it supported data capabilities in addition to voice services. The initial data capacity was low; hence, the key enhancements were aimed at improving bandwidth. To enable this transformation, the basic 2G network architecture incorporated Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) as alternatives to the circuit-switched access of 1G networks. Code division means that multiple users can access the same channel at the same time, but with a different coding for each user's stream (Dahlman, Parkvall & Per Beming 2008). The data is encoded before transmission and then decoded at the receiving end in order to recover as many channels as were encoded into the signal. Time division works rather like code division, but allocates each user short lapses of time in the same channel. The bursts are so short and frequent that voice and other output are not noticeably degraded (Lescuyer & Lucidarme 2008).
In addition to CDMA and TDMA, there were two other networks in the 2G category: D-AMPS and PDC. The latter was used only in Japan as the network compatible with its 1G NTT platform. D-AMPS (Digital Advanced Mobile Phone System) is fully digital. It was structured to accommodate the newer digital channels as well as the analogue channels in a single cell, with the MSC allocating the available bandwidth accordingly. D-AMPS operates from 1850 to 1910 MHz for upstream and from 1930 to 1990 MHz for downstream traffic. In D-AMPS, the radio waves were shorter, at 16 cm, allowing a shorter quarter-wave antenna of 4 cm on the mobile phone (Myung 2010); this change led to smaller and lighter handsets. In the digital realm, voice at the microphone is digitised using complex algorithms inside the phone's module before being sent through the network. The digitisation process also compresses the voice stream from the 56 kbps of the analogue systems to only 8 kbps in the 2G networks, freeing up bandwidth by a factor of seven and giving 2G networks more channel capacity (Ergen 2009). In D-AMPS, three users may share a single frequency through time division, with 25 frames per second on each pair of frequencies. Each frame has six time slots and lasts 40 milliseconds in total (Dahlman, Parkvall & Per Beming 2008).
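The D-AMPS figures quoted above can be checked with a quick back-of-the-envelope calculation. The two-slots-per-full-rate-user figure is an assumption consistent with three users sharing a six-slot frame, not a number stated in the text:

```python
# Sanity checks on the D-AMPS figures above (illustrative sketch).

analog_voice_bps = 56_000    # analogue voice channel rate (bps)
digital_voice_bps = 8_000    # compressed 2G digital voice rate (bps)

# Compression frees up bandwidth by a factor of seven.
gain = analog_voice_bps / digital_voice_bps
print(gain)  # 7.0

# A 40 ms frame repeats 25 times per second.
frame_ms = 40
frames_per_second = 1000 / frame_ms
print(frames_per_second)  # 25.0

# Six slots per frame and (assumed) two slots per full-rate user
# give the three users per carrier mentioned above.
slots_per_frame = 6
slots_per_user = 2
print(slots_per_frame // slots_per_user)  # 3
```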
In the first arrangement, three users share a TDM frame, each receiving a 324-bit allocation (Ekstrom et al 2006). Of these, 64 bits are used for network control and decision-making, 101 bits for parity error correction, and 159 bits for the actual user data, such as voice. The second arrangement has six users sharing the same TDM frame; to achieve this capacity, the voice stream has to be compressed from 8 kbps to 4 kbps (Dahlman, Parkvall & Per Beming 2008). In D-AMPS, the mobile device actually assists the MSC in hand-offs by using the idle time of the frame to compare signal strengths. If it discovers a stronger signal than that from its serving BTS, it requests to be handed over; this scenario is called Mobile Assisted Handoff (MAHO). D-AMPS is used mainly in Japan and the U.S., while GSM is used almost everywhere else in the world. GSM is similar to D-AMPS in many respects. However, in GSM the mobile receives on a frequency 55 MHz higher than the one on which it transmits, whereas in D-AMPS the offset is 80 MHz. In addition, GSM channels are 170 kHz wider than those of D-AMPS, which allows higher data rates per user. The section below discusses GSM in detail (Dahlman, Parkvall & Per Beming 2008).
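The 324-bit slot budget described above can be verified directly; the three fields account for the whole allocation:

```python
# Bit budget of one 324-bit D-AMPS slot, as described above.
control_bits = 64    # network control and decision-making
parity_bits = 101    # parity error correction
user_bits = 159      # actual user data, e.g. digitised voice

slot_bits = control_bits + parity_bits + user_bits
print(slot_bits)  # 324

# Per-user gross rate at 25 frames per second, one slot per frame.
print(slot_bits * 25)  # 8100 bits/s, matching the 8 kbps voice stream
```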
GSM is the acronym for Global System for Mobile Communication, a system that provides 124 channels, each consisting of a pair of frequencies (Ergen 2009). Each channel holds eight separate connections and has a bandwidth of 200 kHz. Every active mobile is assigned one slot on one channel pair. Even though a cell may support as many as 992 channels, only a fraction of them is utilised in order to avoid traffic conflicts with the neighbouring cells. GSM stations cannot transmit and receive simultaneously, so a GSM station must switch between the two modes over time (Ergen 2009). Consequently, of the eight time slots allocated per connection, four are used in either direction.
In the figure, the shaded slots indicate the time slots allocated to the mobile station. The device places its traffic into each of its slots in succession until all the payload has been sent or received. Only one slot per frame is dedicated to user traffic, mainly to avoid conflicts, as mentioned above. Every slot contains one 148-bit data frame, which occupies 577 microseconds of time. Every frame must begin and end with three 0 bits for delineation purposes. In addition, each frame contains two 57-bit information fields, each immediately followed by a control bit characterising the field (whether voice or data). The information fields are separated by a 26-bit synchronisation region, during which the receiver has time to align itself with the transmission properties of the sender.
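The burst layout described above sums neatly to 148 bits, assuming one control flag after each information field:

```python
# Field layout of one 148-bit GSM data frame, as described above.
burst_fields = {
    "tail_start": 3,   # three 0 bits delineating the start
    "data_1": 57,      # first information field
    "flag_1": 1,       # control bit: voice or data
    "training": 26,    # synchronisation region
    "flag_2": 1,       # control bit for the second field
    "data_2": 57,      # second information field
    "tail_end": 3,     # three 0 bits delineating the end
}
total_bits = sum(burst_fields.values())
print(total_bits)  # 148

# 148 bits in 577 microseconds of air time is roughly 256.5 kbit/s;
# the remainder of the gross slot rate is guard time.
raw_rate_kbps = total_bits / 577e-6 / 1000
print(round(raw_rate_kbps, 1))  # 256.5
```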
From the figure, the GSM multiframe is made up of twenty-six 1250-bit TDM frames, each of which in turn carries eight 148-bit data frames. Each transmitter shares a frame with seven others, so its maximum data rate is limited to 33.854 kbps, more than twice the per-user rate of D-AMPS. Additionally, another multiframe with 51 slots is used in GSM; these slots carry other network control services. The Broadcast Control Channel (BCC) is one such service, which continually broadcasts the base station's identity, signal strength, and other status information. This information helps the mobile station recognise when it has moved into a new cell.
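The 33.854 kbps figure can be reproduced from the multiframe dimensions above, assuming the standard 120 ms GSM multiframe period (a value not stated in the text):

```python
# Deriving the per-user GSM data rate from the multiframe structure.
frames_per_multiframe = 26
bits_per_frame = 1250
multiframe_s = 0.120   # assumed: standard GSM multiframe period

# Gross channel rate across all eight users.
gross_bps = frames_per_multiframe * bits_per_frame / multiframe_s
print(round(gross_bps))  # 270833

# One transmitter shares the frame with seven others.
users_per_frame = 8
per_user_bps = gross_bps / users_per_frame
print(round(per_user_bps, 1))  # 33854.2, i.e. the 33.854 kbps above
```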
This term is a synonym for Interim Standard 95 (IS-95), which is mostly used to describe the Code Division Multiple Access (CDMA) network architecture; cdmaOne is also used to denote this type of CDMA network. The essential difference between CDMA and GSM is that each CDMA transmitter can use the entire allocated spectrum, instead of using only a small fraction of the frequency band to send data. To make this possible, the architecture applies a distinct code to each transmitter's output before the outputs are fed into one channel. According to coding theory, the users' signals do not collide when they are added together: the signals add up linearly in such a way that they can be separated at the receiving end. Each bit time in CDMA is split into chips, typically 64 or 128 (Sesia & Toufik 2011).
It is worth mentioning that every device is assigned a chip sequence, a binary string in which it transmits. To transmit a 1 bit, the device sends its chip sequence; to transmit a 0 bit, it sends the one's complement of that sequence, and no other pattern is allowed (Ekstrom et al 2006). For example, a device with the chip sequence 00011011 sends 00011011 for a 1 bit and 11100100 for a 0 bit. Therefore, there are no conflicts of channel allocation in CDMA, because each transmitter may utilise the full bandwidth whenever it is active. The strength of the signal arriving at the BTS from a mobile device falls with the distance between them. The general rule for the mobile station is therefore to transmit at a power level inverse to the signal level it receives from the station. Moreover, the station may direct nearby devices to lower their signal levels so that the weaker signals from distant mobile devices are not drowned out. A CDMA band is typically 1.25 MHz wide, many times wider than the channels of 1G networks and GSM (Lescuyer & Lucidarme 2008).
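The chip-sequence scheme can be sketched in a few lines. This is a minimal illustration with two stations and short 8-chip orthogonal sequences (the sequences and station names are illustrative, not taken from any real network plan); in bipolar notation a 0 chip becomes -1 and a 1 chip becomes +1:

```python
# Minimal CDMA sketch: two stations with orthogonal 8-chip sequences.

def to_bipolar(chips):
    # map chip string to bipolar form: '0' -> -1, '1' -> +1
    return [1 if c == "1" else -1 for c in chips]

seq_a = to_bipolar("00011011")  # station A's chip sequence
seq_b = to_bipolar("00101110")  # station B's chip sequence

# Orthogonality: the inner product of distinct sequences is zero.
assert sum(a * b for a, b in zip(seq_a, seq_b)) == 0

def transmit(seq, bit):
    # a station sends its sequence for a 1 bit, its complement for a 0 bit
    return seq if bit == 1 else [-c for c in seq]

# A sends a 1, B sends a 0; the channel adds the signals linearly.
channel = [a + b for a, b in zip(transmit(seq_a, 1), transmit(seq_b, 0))]

def decode(channel, seq):
    # the normalised inner product with a station's sequence
    # recovers that station's bit from the combined signal
    s = sum(c * x for c, x in zip(channel, seq)) / len(seq)
    return 1 if s > 0 else 0

print(decode(channel, seq_a))  # 1
print(decode(channel, seq_b))  # 0
```

Because the sequences are orthogonal, each receiver's inner product cancels every contribution except its own station's, which is why the linearly added signals can be separated.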
In the early evolution of 2G networks, known as 2.5G, the communication platform identified as Enhanced Data rates for GSM Evolution (EDGE) was formed. This network allowed more bits per slot in the uplink as well as in the downlink, making higher speeds possible than in GSM. The main drawback of EDGE is that as the bit rate increases, the errors also increase, so a significant amount of resources has to be allocated to error correction. The General Packet Radio Service (GPRS) is another 2.5G network. This network type is an overlay of packet transmission over the original voice structures of either D-AMPS or GSM. The additional packet bits allocated in the channel require intelligent coordination of voice versus data in system resource allocation. This is normally done by the BTS in accordance with the voice and data requirements at any given time. Usually, voice is given priority; that is why a data session on a GPRS network will be suspended if the device user either initiates or receives a voice call (Sesia & Toufik 2011).
The third generation communication platform revolutionised the wireless communication domain by integrating high bandwidth with support for newer features. These features include e-mail, rich gaming, media playback, and data streaming, among others. This section will discuss 3G and the specific additional features that differentiate it from 2G. Furthermore, it will also discuss the various networks that fall into the 3G bracket, including UMTS, CDMA2000, W-CDMA, and HSDPA (Dahlman, Parkvall & Per Beming 2008).
UMTS: Universal Mobile Telecommunications System
UMTS is the third-generation mobile communications network that was launched in 2002 on the GSM base. Rather than a new system, UMTS has largely been seen as an upgrade of earlier systems, since the packet evolution of GSM had already been carried out through EDGE and GPRS, both considered 2.5G networks. UMTS was designed and supported by the 3GPP group as a component of IMT-2000 (International Mobile Telecommunications-2000) (Myung 2010). It uses wideband CDMA for greater bandwidth efficiency. This technology introduces new network elements, as discussed below.
User Equipment (UE)
The user equipment is the broader name for the mobile phone, among other devices, since the user equipment may handle more functions with the enhanced network support. For instance, the original device supported only the voice and paging functions. In 2G networks, in addition to these two functions, the mobile stations could also support a rudimentary packet transfer system, which was embedded as an element of the circuit-switched network (GSM/GPRS) (Myung 2010). In the third generation networks, the mobile devices could support all the functions listed above and, in addition, others that require higher data rates, such as internet gaming, chat, and certain forms of video streaming. In the 4G system, the user equipment supports even more functions, including mobile TV (Lescuyer & Lucidarme 2008).
Radio Network Subsystem (RNS)
This subsystem replaces the GSM Base Station Subsystem and provides network management functions such as traffic handling, resource allocation, and roaming. It comprises the Node B and the Radio Network Controller (RNC): the RNC controls the Node Bs and encrypts and decrypts information, while the Node B is the replacement of the BTS and handles radio transmission and reception.
The Core Network
This is the central network processing unit; it replaces the Network Switching System (NSS). This architecture contains both packet-switched and circuit-switched elements. The circuit-switched part comprises the Gateway MSC (GMSC) and the Mobile Switching Center (MSC); these elements operate as they did in the GSM network. The packet-switched components are the GGSN and the SGSN. The Serving GPRS Support Node (SGSN) is in charge of packet mobility management, active session management, and billing. The Gateway GPRS Support Node (GGSN) is the link between external networks and the core network; it is responsible for protocol harmonisation, data filtering, and authentication, among other functions.
Theoretically, UMTS supports a maximum data rate of 384 kbps, and a peak data rate of 42 Mbps with HSPA implemented, which translates in practice to speeds of up to 7.2 Mbps for handsets with HSDPA support. The majority of countries have already upgraded their UMTS networks to HSDPA, usually labelled 3.5G, in order to enable faster data transfer rates.
CDMA2000 is mainly the American 3G equivalent. It evolved from IS-95, the standard behind CDMA. According to Sesia & Toufik (2011), CDMA2000 can also be referred to as IMT-Multi Carrier (IMT-MC). CDMA2000 is the evolutionary name for a number of advancements performed on the CDMA architecture, upon which it is based. The evolution from CDMA to CDMA2000 passed through the following stages: CDMA2000 1xRTT, CDMA2000 1xEV-DO (with three releases, 0, A, and B), release C, which is also called Ultra Mobile Broadband (UMB), and CDMA2000 1xEV-DV.
CDMA2000 1x, also called IS-2000, is the core wireless interface standard for CDMA and has the same radio frequency bandwidth as the IS-95 standard (hence the 1x). However, 1xRTT was developed to increase uplink capacity by adding 64 more orthogonal channels to the existing 64, effectively doubling the network's capacity. The 1x enhancement supports network data rates of up to 154 kbps and actual data rates of between 80 and 100 kbps (Dahlman, Parkvall & Per Beming 2008).
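The channel-doubling idea can be illustrated with Walsh (Hadamard) spreading codes: doubling the code length doubles the number of mutually orthogonal channels available. This is a sketch of the general principle, not the actual CDMA2000 code assignment:

```python
# Sylvester construction of Walsh/Hadamard codes (illustrative).

def hadamard(n):
    # builds an n x n matrix whose rows are mutually orthogonal codes;
    # n must be a power of two
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

h64, h128 = hadamard(64), hadamard(128)
print(len(h64), len(h128))  # 64 128 -- doubling length doubles channels

# spot-check: any two distinct rows have zero inner product
for i, j in [(0, 1), (5, 40), (10, 63)]:
    assert sum(a * b for a, b in zip(h64[i], h64[j])) == 0
```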
This evolution uses both CDMA and TDMA for high-speed internet access (Myung 2010). Many mobile networks use it, especially where high-speed internet is required. Its third revision (Release C), called Ultra Mobile Broadband, preceded the 4G layout. It has been used extensively for high-speed mobile browsing, with features such as gaming support and video streaming for end users through compatible mobile stations (Sesia & Toufik 2011).
IMT-2000 is the wireless communication standard that laid down the requirements for 3G networks. IMT-Advanced, approved in 2008, proceeded to extend these requirements in order to offer mobile broadband on smartphones, wireless laptop modems, and other devices. Such a network is also expected to support gaming, video streaming and chat, Multimedia Messaging Service (MMS), mobile TV, and HDTV (Furht 2009). The properties of the IMT-Advanced standard include the following: it is a packet-switched network based on the Internet Protocol (IP); it offers high data rates of 1 Gbps when the user and the base station are stationary, and up to 100 Mbps when the user moves at high speed relative to the base station;
Spectral efficiency of 15 bit/s/Hz in the downlink and 6.75 bit/s/Hz in the uplink;
Seamless link between the networks and smooth handovers;
The high quality multimedia support (Lescuyer & Lucidarme 2008).
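The listed targets fit together arithmetically: peak rate is spectral efficiency times channel bandwidth. Assuming a 40 MHz channel (the widest bandwidth IMT-Advanced envisions, mentioned later in this document), a rough sketch:

```python
# Peak data rate from the IMT-Advanced targets listed above.
downlink_eff = 15        # downlink spectral efficiency (bit/s/Hz)
bandwidth_hz = 40e6      # assumed 40 MHz channel

peak_bps = downlink_eff * bandwidth_hz
print(peak_bps / 1e6)    # 600.0 Mbps per 40 MHz carrier

# Reaching the 1 Gbps stationary target therefore implies
# combining spectrum beyond a single 40 MHz carrier.
carriers_needed = 1e9 / peak_bps
print(round(carriers_needed, 2))  # 1.67
```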
3G Evolution to 4G
The increasingly demanding IT landscape forced researchers to further advance equipment and network capabilities in order to come up with systems that would support even higher data rates, handle user mobility and handovers better, run more services and programs, and guarantee longer-term reliability (Sesia & Toufik 2011). These requirements drove the industry towards the new wireless communication experience of 4G. In some cases this meant changes to the existing 3G facilities and standards; in others, entirely new equipment. As expected, the major changes were along the dimensions of data rates (both uplink and downlink), the variety of services and programs that could operate on the network, and advanced multimedia support, among other factors (Dahlman, Parkvall & Per Beming 2008).
Figure 6: Evolution of Wireless Networks
(Sesia & Toufik 2011)
The first change required network support for channels with bandwidths of up to 40 MHz and maximum spectral efficiency; IMT-Advanced had already envisioned these changes. In addition, spectral efficiency was expected to reach a peak of 2.2 bit/s/Hz/cell and to support mobility at speeds of up to 350 km/h. As stated earlier, the main body behind the migration from 3G to 4G was the Third Generation Partnership Project (3GPP), which ensured the roll-over into 4G at the end of 2010 (Guerra & Ramlogan 2008). While some aspects of the platform change were new and unprecedented, numerous changes were simply upgrades of existing mechanisms, and some of the attributes envisioned for 4G have not been realised yet. The section below will discuss the fourth generation wireless networks at length, including their properties, variations, current usability, and future (Guerra & Ramlogan 2008).