Thursday, October 31, 2019

Construction Defects with Homeowners Case Study

Construction Defects with Homeowners - Case Study Example From the third year and up to the tenth, major structural defects are covered, including foundation walls, load-bearing portions, supporting beams, and foundation footings. The homeowner should file a claim within the covered period, but may also notify the local construction official of foundation damage that may or may no longer be covered by the warranty. The law covering construction of foundations is Title 5, Community Affairs, Chapter 25, Regulations Governing New Home Warranties and Builders' Registration, or N.J.A.C. 5:25. Specifically, the regulations "prescribe the form and coverage of the minimum warranty established by the Act; govern procedures for the implementation and processing of claims pursuant to the warranty; establish requirements for registration as a builder, and procedures governing the denial, revocation and suspension of builders registration; and, establish the requirements of private alternate." Adams (2010) cited many builder-contractor liabilities in cases where foundation issues occur among homeowners. Home building foundations usually last for tens or even hundreds of years when done properly, but foundation problems are "serious and difficult to fix [...] if built poorly [...and] threaten the stability of the home and the homeowner's investment" (Adams, 2010, p. 1). One of the more critical issues with foundation problems is that they often become apparent only several years after completion and even occupancy of the home, leaving the homeowner unsure of what recourse may be available.

Tuesday, October 29, 2019

How successful was Josip Broz Tito as a ruler of Yugoslavia Essay

How successful was Josip Broz Tito as a ruler of Yugoslavia - Essay Example He was imprisoned in the Petrovaradin fortress after being arrested for anti-war propaganda. Still a prisoner, Tito was sent to Galicia to fight against Russia. A howitzer shell seriously injured him in Bukovina, and his whole battalion was captured by Russia in April of 1915. Josip Broz Tito spent several months in hospital as he recovered from his injuries. After his recovery, he was sent to a work camp in the Ural Mountains in the fall of 1916. During April of 1916, he organized demonstrations of prisoners of war and was arrested, but he eventually escaped. He rejoined the demonstrations in Saint Petersburg on July 16 and 17, 1917. He tried to flee to Finland to escape arrest, but he was sent to prison in the Petropavlovsk fortress three weeks after the demonstrations. He was then held in a camp in Kungur, from which he escaped by train. In November 1917, he went to Omsk, Siberia and enlisted in the Red Army. During the spring of 1918, he completed an application to join the Russian Communist Party. He was granted membership in 1920, not long before the Communist Party of Yugoslavia was banned. The Communist Party of Yugoslavia's influence on the political arena of the Kingdom of Yugoslavia was insignificant. Josip Broz Tito eventually became a member of the Political Bureau of the Central Committee of the Party in 1934. In April of 1941, Yugoslavia was invaded by the Axis forces. The Communist Party organized a resistance movement, and Tito demanded a public call for armed resistance against the Germans. The Yugoslav National Liberation Army named Tito its Chief Commander. According to the article, "the NLA partisans staged a wide-spread guerrilla campaign and started liberating chunks of territory in which they organized peoples committees to act as civilian government" (Historymania.com). He was the main leader of the Anti-Fascist Council of National Liberation of Yugoslavia.
The organization convened in Bihac in

Sunday, October 27, 2019

Delta Modulation And Demodulation Computer Science Essay

Delta Modulation And Demodulation Computer Science Essay A modem can improve communication system performance by using multiple modulation schemes, each comprising a modulation technique and encoder combination. As communication system performance and objectives change, different modulation schemes may be selected. A modulation scheme may also be selected based on an estimate of the communication channel's scattering function; the modem estimates the channel scattering function from measurements of the channel's frequency (Doppler) and time (multipath) spreading characteristics. In an adaptive sigma-delta modulation and demodulation technique, the quantizer step size is adapted based on estimates of the input signal to the quantizer, rather than on estimates of the input signal to the modulator. A technique for digital conferencing of voice signals in systems using adaptive delta modulation (ADM) with an idle pattern of alternating 1s and 0s has been described. Based on majority logic, it permits distortion-free reception of the voice of a single active subscriber by all the other subscribers in the conference. Distortion exists when more than one subscriber is active, and the extent of this distortion depends on the type of ADM algorithm used. An LSI-oriented system based on time-sharing of a common circuit by a number of channels has been implemented and tested. This technique, with only minor changes in circuitry, handles ADM channels that have idle patterns different from alternating single 1s and 0s, and it is used for noise reduction. The modulation factor does not require a large amount of data to represent; its representation is based on a frequency-domain function having particular characteristics. A preferred embodiment of the invention incorporates transform or subband filtered signals, which are transmitted as a modulated analog representation of a local region of a video signal. The modulation factor reflects the particular characteristic.
Side information specifies the modulation factor.

1.2 Aim: The aim is to use digital techniques to wirelessly communicate voice information. Wireless environments are inherently noisy, so the voice coding scheme chosen for such an application must be robust in the presence of bit errors. Pulse Coded Modulation (PCM) and its derivatives are commonly used in wireless consumer products for their compromise between voice quality and implementation cost. Adaptive Delta Modulation (ADM) is another voice coding scheme, a mature technique that should be considered for these applications because of its bit error robustness and its low implementation cost. Bandpass modulation techniques encode information as the amplitude, frequency, phase, or phase and amplitude of a sinusoidal carrier. These bandpass modulation schemes are known by their acronyms ASK (amplitude shift keying), FSK (frequency shift keying), PSK (phase shift keying), and QAM (quadrature amplitude modulation), where keying or modulation indicates that a carrier signal is modified in some manner. The carrier is a sinusoidal signal that is initially devoid of any information. The purpose of the carrier is to translate an essentially baseband information signal to a frequency and wavelength that can be sent with a guided or propagating electromagnetic (EM) wave. Bandpass ASK is similar to the baseband pulse amplitude modulation (PAM) of Chapter 2, Baseband Modulation and Demodulation, but FSK, PSK, and DM are new non-linear modulation techniques. ASK, FSK, and PSK can be readily extended to multiple level (M-ary) signaling and demodulated coherently or non-coherently. The optimum receiver for bandpass symmetrical or asymmetrical signals is the correlation receiver, which is developed for baseband signals in Chapter 2. Coherent demodulation uses a reference signal with the same frequency and phase as the received signal.
Noncoherent demodulation of bandpass signaling may use differential encoding of the information to derive the reference signal in the correlation receiver. The observed bit error rate (BER) in MATLAB simulations of several bandpass digital communication systems with coherent and noncoherent correlation receivers is compared to the theoretical probability of bit error (Pb). Digital communication systems are subject to performance degradations with additive white Gaussian noise (AWGN). MATLAB simulations of bandpass communication systems are used to investigate the effect upon BER of the performance of the correlation receiver, the reduction in BER with Gray coding of M-ary data, and binary and quaternary differential signaling. MATLAB simulations of such bandpass digital communication systems and investigations of their characteristics and performance are provided here. These simulations confirm the theoretical expectation for Pb and are the starting point for the what-ifs of bandpass digital communication system design. Finally, the constellation plot depicts the demodulated in-phase and quadrature signals of complex modulation schemes in the presence of AWGN. The optimum decision regions are shown, and the observed BER performance of the bandpass digital communication system can be qualitatively assessed. Delta Modulation: Delta modulation is also abbreviated as DM or Δ-modulation. It is a technique for analog-to-digital and digital-to-analog signal conversion. It is used to transmit voice in applications where the quality of the voice is not of primary importance. DM is the simplest form of differential pulse-code modulation (DPCM), but there are differences between these two techniques. In DPCM, successive samples are encoded into streams of n-bit data; in delta modulation, the transmitted data is reduced to a 1-bit data stream.
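The comparison of observed BER against theoretical Pb described above can be sketched in a few lines of code. The following is a minimal pure-Python simulation of coherent BPSK over AWGN, not the essay's MATLAB code; the function name, bit count, and Eb/N0 value are illustrative assumptions:

```python
import math
import random

def bpsk_ber(eb_n0_db, n_bits=200_000, seed=1):
    """Simulate coherent BPSK over AWGN and return the observed BER.

    Bits map to antipodal symbols +/-1 (Eb = 1); the coherent
    correlation receiver reduces to a sign decision on the
    noisy matched-filter output.
    """
    rng = random.Random(seed)
    eb_n0 = 10 ** (eb_n0_db / 10)       # linear Eb/N0
    sigma = math.sqrt(1 / (2 * eb_n0))  # noise standard deviation
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0   # antipodal signaling
        received = symbol + rng.gauss(0, sigma)
        decided = 1 if received > 0 else 0
        errors += decided != bit
    return errors / n_bits

# theoretical Pb for coherent BPSK: Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0))
pb_theory = 0.5 * math.erfc(math.sqrt(10 ** (4 / 10)))
ber_sim = bpsk_ber(4)  # observed BER at Eb/N0 = 4 dB
```

With enough bits, the observed BER converges to the theoretical Pb, which is the confirmation the essay attributes to its MATLAB simulations.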
Main features:

* The analog signal is approximated as a series of segments.
* Each segment of the approximated signal is compared with the original analog wave to find the increase or decrease in relative amplitude.
* This comparison of the original and approximated analog waves determines the successive bits to be transmitted.
* Only the change in information is sent; that is, only an increase or decrease of the signal amplitude from the previous sample is sent, whereas a no-change condition causes the modulated signal to remain at the same 0 or 1 state as the previous sample.

By using oversampling techniques in delta modulation we can get a high signal-to-noise ratio; that is, the analog signal is sampled at a rate many times higher than the Nyquist rate. Principle: Delta modulation quantizes the difference between the current sample and the previous step, rather than the absolute value of the input analog waveform, as shown in Fig. 1 (block diagram of a Δ-modulator/demodulator). The quantizer of the delta modulator encodes the difference between the input signal and the average of the previous steps. The quantizer is realized as a comparator referenced to 0 (in a 2-level quantizer), and its output is either 1 or 0: 1 means the input signal is positive and 0 means it is negative. It is also called a bit-quantizer because it quantizes only one bit at a time. The output of the demodulator rises or falls because it is simply an integrator circuit: if a 1 is received the output rises, and if a 0 is received the output falls. The integrator itself also acts as a low-pass filter. Transfer Characteristics: The delta modulator's transfer characteristic follows a signum function; it quantizes to only two levels, and only one bit at a time.
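The 1-bit quantizer and integrator demodulator just described can be modeled directly. This is an illustrative Python sketch of linear delta modulation (the step size and test signal are assumptions, not values from the essay):

```python
import math

def dm_encode(samples, step=0.1):
    """1-bit quantizer: emit 1 if the input is above the running
    staircase approximation, else 0, then move the staircase one step."""
    approx, bits = 0.0, []
    for x in samples:
        bit = 1 if x > approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step=0.1):
    """The demodulator is just an integrator (staircase generator);
    a low-pass filter would normally smooth the staircase output."""
    approx, out = 0.0, []
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

# oversampled sine: maximum slope ~0.098 per sample, just under the step,
# so the staircase tracks the input without slope overload
sine = [math.sin(2 * math.pi * n / 64) for n in range(256)]
rebuilt = dm_decode(dm_encode(sine))
```

Because only the 1-bit stream is transmitted, the decoder reconstructs the waveform purely by integrating up-steps and down-steps.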
Output signal power: In delta modulation there is no restriction on the amplitude of the signal waveform, because there is no fixed number of levels. There is, however, a limitation on the slope of the signal waveform: we can observe whether slope overload occurs and, if so, it can be avoided, provided the signal waveform changes gradually. Bit-rate: The possibility of interference in either DM or PCM is due to the limited bandwidth of the communication channel; for this reason DM and PCM operate at the same bit-rate. Noise in Communication Systems: Noise is probably the only topic in electronics and telecommunications with which everyone must be familiar, no matter what his or her specialization. Electrical disturbances interfere with signals, producing noise. It is ever present and limits the performance of most systems. Measuring it is very contentious; almost everybody has a different method of quantifying noise and its effects. Noise may be defined, in electrical terms, as any unwanted introduction of energy tending to interfere with the proper reception and reproduction of transmitted signals. Many disturbances of an electrical nature produce noise in receivers, modifying the signal in an unwanted manner. In radio receivers, noise may produce hiss in the loudspeaker output. In television receivers, snow or confetti (colored snow) becomes superimposed on the picture. In pulse communications systems, noise may produce unwanted pulses or perhaps cancel out the wanted ones, and it may cause serious errors. Noise can limit the range of systems for a given transmitted power. It affects the sensitivity of receivers by placing a limit on the weakest signals that can be amplified. It may sometimes even force a reduction in the bandwidth of a system.
Noise is unwanted electrical or electromagnetic energy that degrades the quality of signals and data. Noise occurs in digital and analog systems, and can affect files and communications of all types, including text, programs, images, audio, and telemetry. In a hard-wired circuit such as a telephone-line-based Internet hookup, external noise is picked up from appliances in the vicinity, from electrical transformers, from the atmosphere, and even from outer space. Normally this noise is of little or no consequence. However, during severe thunderstorms, or in locations where many electrical appliances are in use, external noise can affect communications. In an Internet hookup it slows down the data transfer rate, because the system must adjust its speed to match conditions on the line. In a voice telephone conversation, noise rarely sounds like anything other than a faint hissing or rushing. Noise is a more significant problem in wireless systems than in hard-wired systems. In general, noise originating from outside the system is inversely proportional to the frequency, and directly proportional to the wavelength. At a low frequency such as 300 kHz, atmospheric and electrical noise are much more severe than at a high frequency like 300 MHz. Noise generated inside wireless receivers, known as internal noise, is less dependent on frequency. Engineers are more concerned about internal noise at high frequencies than at low frequencies, because the less external noise there is, the more significant the internal noise becomes. Communications engineers are constantly striving to develop better ways to deal with noise. The traditional method has been to minimize the signal bandwidth to the greatest possible extent. The less spectrum space a signal occupies, the less noise is passed through the receiving circuitry. However, reducing the bandwidth limits the maximum speed of the data that can be delivered.
Another, more recently developed scheme for minimizing the effects of noise is called digital signal processing (DSP). Using fiber optics, a technology far less susceptible to noise, is another approach. Sources of Noise: As with all geophysical methods, a variety of noises can contaminate our seismic observations. Because we control the source of the seismic energy, we can control some types of noise. For example, if the noise is random in occurrence, such as some of the types of noise described below, we may be able to minimize its effect on our seismic observations by recording repeated sources all at the same location and averaging the result. We've already seen the power of averaging in reducing noise in the other geophysical techniques we have looked at. Beware, however, that averaging only works if the noise is random; if it is systematic in some fashion, no amount of averaging will remove it. The noises that plague seismic observations can be lumped into three categories depending on their source. Uncontrolled Ground Motion: This is the most obvious type of noise. Anything that causes the ground to move, other than your source, will generate noise. As you would expect, there could be a wide variety of sources for this type of noise. These would include traffic traveling down a road, running engines and equipment, and people walking. Other sources that you might not consider include wind, aircraft, and thunder. Wind produces noise in a couple of ways, but of concern here is its effect on vegetation: if you are surveying near trees, wind causes the branches of the trees to move, and this movement is transmitted through the trees and into the ground via the trees' roots. Aircraft and thunder produce noise by the coupling of ground motion to the sound that we hear produced by each. Adaptive Delta Modulation (ADM): Another type of DM is Adaptive Delta Modulation (ADM), in which the step size is not fixed.
The step size becomes progressively larger when slope overload occurs; this reduces the slope error at the expense of increased quantization error, which in turn can be reduced by a low-pass filter. The basic delta modulator was studied in the experiment entitled Delta Modulation. It is implemented by the arrangement shown in block diagram form in the figure (Figure: Basic Delta Modulation). A large step size is required when sampling those parts of the input waveform of steep slope, but a large step size worsens the granularity of the sampled signal when the waveform being sampled is changing slowly. A small step size is preferred in regions where the message has a small slope. This suggests the need for a controllable step size, the control being sensitive to the slope of the sampled signal. This can be implemented by an arrangement such as that illustrated in the figure (Fig: An Adaptive Delta Modulator). The gain of the amplifier is adjusted in response to a control voltage from the SAMPLER, which signals the onset of slope overload. The step size is proportional to the amplifier gain, as was observed in an earlier experiment. Slope overload is indicated by a succession of output pulses of the same sign. The TIMS SAMPLER monitors the delta modulated signal and signals when there is no change of polarity over 3 or more successive samples. The actual ADAPTIVE CONTROL signal is +2 volts under normal conditions, and rises to +4 volts when slope overload is detected. The gain of the amplifier, and hence the step size, is made proportional to this control voltage. Provided the slope overload is only moderate, the approximation will catch up with the wave being sampled. The gain will then return to normal until the sampler again falls behind. Comparison of PCM and DM: Comparing signal-to-noise ratios, DM has a larger value than PCM; likewise, an ADM signal-to-noise ratio compares favorably with that of companded PCM.
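The adaptive step-size rule described above, growing the step while successive output bits have the same sign and shrinking it when they alternate, can be sketched in software. This is an illustrative Python model, not the TIMS hardware; the growth factor `k` and the step limits are assumed tuning constants:

```python
import math

def adapt(step, same_sign, k=1.5, smin=0.01, smax=0.5):
    """Grow the step on repeated same-sign bits (the slope-overload
    indicator), shrink it on alternation (the granular region).
    k, smin and smax are illustrative constants, not essay values."""
    step = step * k if same_sign else step / k
    return min(max(step, smin), smax)

def adm_encode(samples, step=0.05):
    approx, prev, bits = 0.0, None, []
    for x in samples:
        bit = 1 if x > approx else 0
        if prev is not None:
            step = adapt(step, bit == prev)
        approx += step if bit else -step
        bits.append(bit)
        prev = bit
    return bits

def adm_decode(bits, step=0.05):
    # the decoder replays exactly the same adaptation rule from the
    # bit stream alone, so no side information about step size is sent
    approx, prev, out = 0.0, None, []
    for bit in bits:
        if prev is not None:
            step = adapt(step, bit == prev)
        approx += step if bit else -step
        out.append(approx)
        prev = bit
    return out

sine = [math.sin(2 * math.pi * n / 64) for n in range(256)]
rebuilt = adm_decode(adm_encode(sine))
```

Because both ends apply the same rule, moderate slope overload self-corrects: the step grows until the staircase catches up, then shrinks back.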
Powerful PCM requires complex coders and decoders, and increasing the resolution requires a large number of bits per sample. Standard PCM systems have no memory: each sample value is separately encoded into a series of binary digits. An alternative, which overcomes some limitations of PCM, is to use past information in the encoding process, and delta modulation is one way of performing such source coding. The signal is first quantized into discrete levels, with the step size between adjacent levels kept constant during the quantization process. The signal makes a transition from one level to an adjacent one. After the quantization operation is done, the signal is transmitted by sending a zero for a negative transition and a one for a positive transition. Note that the quantized signal must change at each sampling point. The transmitted bit train would be 111100010111110 for the above case. The demodulator for a delta-modulated signal is simply a staircase generator: when a one is received the staircase increments positively, and when a zero is received it increments negatively. The staircase is then smoothed, in general by a low-pass filter. The essential choices in delta modulation are the step size and the sampling period; overloading occurs when the signal changes too rapidly for the steps to follow. In modern consumer electronics, short-range digital voice transmission is used in many products, such as cordless telephones, wireless headsets (for mobile and landline telephones), and baby monitors. These digital techniques are used to wirelessly communicate voice information. Because wireless environments are inherently noisy, the voice coding scheme chosen for such an application must be robust in the presence of bit errors.
Pulse Coded Modulation (PCM) and its derivatives are commonly used in wireless consumer products because of their compromise between voice quality and implementation cost, but they are not robust schemes in the presence of bit errors. Another important voice coding scheme is Adaptive Delta Modulation (ADM). It is a mature technique that merits consideration for these types of applications due to its robustness to bit errors and its low implementation cost. ADM quantizes the difference between the current sample and the predicted value of the next sample. It uses a variable called the step height to adjust the prediction of the next sample, so that both slowly and rapidly changing input signals are reproduced faithfully. In ADM, each sample is represented by one bit (i.e. 1 or 0), and the one-bit-per-sample stream requires no data framing, minimizing the workload on the host microcontroller. In any digital wireless application there will be bit errors. Most voice coding techniques provide good-quality audio signals in an ideal environment; the real task is to provide good audio in everyday environments, where bit errors will be present. For different voice coding methods and input signals, the traditional performance metrics (e.g. SNR) do not accurately measure audio quality. Mean Opinion Score (MOS) testing overcomes the limitations of these other metrics by measuring audio quality directly. MOS is a scale of 1 to 5 that describes audio quality, where 1 represents very poor (bad) speech quality and 5 represents excellent speech quality. Toll-quality speech, the audio quality of a traditional telephone call, has a MOS score of 4 or higher.
The graph below shows the relationship between MOS scores and bit errors for three of the most common voice coding schemes: CVSD, μ-law PCM, and ADPCM. Continuously Variable Slope Delta (CVSD) coding is a member of the ADM family of voice coding schemes. The graph shows the resulting audio quality (i.e. MOS score) for each of the three schemes as the number of bit errors increases, and it indicates that ADM (CVSD) sounds better than the other schemes as bit errors increase. In an ADM design, error detection and correction typically are not used, because ADM already performs well in the presence of bit errors. This reduces the host processor workload (allowing a low-cost processor to be used). The superior noise immunity and the significantly reduced workload for the host processor argue strongly for ADM as the voice coding method for wireless applications. The following example demonstrates the benefits of ADM for wireless applications: a complete, low-power, small-form-factor wireless voice product that includes all of the necessary building blocks:

* ADM voice codec
* Microcontroller
* RF transceiver
* Power supply including rechargeable battery
* Microphone, speaker, amplifiers, etc.
* Schematics, board layout files, and microcontroller code written in C

Delta modulation (DM) may be viewed as a simplified form of DPCM in which a two-level (1-bit) quantizer is used in conjunction with a fixed first-order predictor. The block diagram of a DM encoder-decoder is shown below. The dm_demo shows the use of delta modulation to approximate an input sine wave signal and a speech signal that were sampled at 2 kHz and 44 kHz, respectively. The source code of the MATLAB demo and its output can be viewed using MATLAB. Notice that the approximated value follows the input value much more closely when the sampling rate is higher.
You may test this by changing the sampling frequency, fs, for the sine wave in the dm_demo file. Since the delta modulator (DM) approximates a waveform Sa(t) by a linear staircase function, the waveform Sa(t) must change slowly relative to the sampling rate. This requirement implies that the waveform must be oversampled, i.e., sampled at least five times the Nyquist rate. Oversampling means that the signal is sampled faster than necessary; in the case of delta modulation, the sampling rate must be much higher than the minimum rate of twice the bandwidth. Delta modulation requires oversampling in order to obtain an accurate prediction of the next input. Since each encoded sample contains a relatively small amount of information, delta modulation systems require higher sampling rates than PCM systems. At any given sampling rate, two types of distortion, described below, limit the performance of the DM encoder. Slope overload distortion: this type of distortion is due to the use of a step size delta that is too small to follow portions of the waveform that have a steep slope. It can be reduced by increasing the step size. Granular noise: this results from using a step size that is too large in parts of the waveform having a small slope. Granular noise can be reduced by decreasing the step size. Even for an optimized step size, the performance of the DM encoder may still be unsatisfactory. An alternative solution is to employ a variable step size that adapts itself to the short-term characteristics of the source signal: the step size is increased when the waveform has a steep slope and decreased when the waveform has a relatively small slope. This strategy is called adaptive DM (ADM). Block Diagram. Adaptive Delta Modulation for Audio Signals: When transmitting speech, for example in telephony, the transfer rate should be kept as small as possible to save bandwidth, for economic reasons.
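The trade-off between slope overload (step too small) and granular noise (step too large) can be seen numerically. In this Python sketch (the step sizes and test signal are illustrative choices, and this is not the dm_demo MATLAB file), the same sine wave is delta-modulated with three step sizes and the reconstruction error is compared:

```python
import math

def dm_mse(samples, step):
    """Run linear delta modulation and return the mean squared error
    between the input and the staircase approximation."""
    approx, err = 0.0, 0.0
    for x in samples:
        approx += step if x > approx else -step
        err += (x - approx) ** 2
    return err / len(samples)

sine = [math.sin(2 * math.pi * n / 64) for n in range(512)]

too_small = dm_mse(sine, 0.01)  # slope overload: staircase cannot keep up
matched   = dm_mse(sine, 0.1)   # step roughly matched to the maximum slope
too_large = dm_mse(sine, 0.8)   # granular noise: staircase oscillates widely
```

The matched step gives the lowest error, which is exactly why an adaptive step size that tracks the short-term slope outperforms any single fixed step.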
For this purpose delta modulation, adaptive delta modulation, and differential pulse-code modulation are used to compress the data. In this work, different kinds of delta modulation and differential pulse-code modulation (DPCM) were realized to compress audio data. First the principles of audio-data compression, on which the modulations are based, are explained. Mathematical tools (e.g. autocorrelation) and algorithms (the LD recursion) are used to develop solutions. Based on this mathematics and these principles, Simulink models were implemented for delta modulation and adaptive delta modulation, as well as for adaptive differential pulse-code modulation. The theories were verified by applying measured signals to these models. Signal-to-noise ratio: Signal-to-noise ratio (often abbreviated SNR or S/N) is an electrical engineering measurement, also used in other fields (such as scientific measurement or biological cell signaling), defined as the ratio of a signal power to the noise power corrupting the signal. A ratio higher than 1:1 indicates more signal than noise. In less technical terms, signal-to-noise ratio compares the level of a desired signal (such as music) to the level of background noise. The higher the ratio, the less obtrusive the background noise is. In engineering, signal-to-noise ratio is the power ratio between a signal (meaningful information) and the background noise: SNR = P_signal / P_noise, where P is average power. Both signal and noise power must be measured at the same and equivalent points in a system, and within the same system bandwidth. If the signal and the noise are measured across the same impedance, then the SNR can be obtained by calculating the square of the amplitude ratio: SNR = (A_signal / A_noise)^2, where A is root mean square (RMS) amplitude (for example, typically, RMS voltage). Because many signals have a very wide dynamic range, SNRs are usually expressed in terms of the logarithmic decibel scale.
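The two definitions above, the power ratio and the squared amplitude ratio across the same impedance, translate directly into code. These small Python helpers are illustrative (the function names are not from the essay):

```python
import math

def snr_from_power(p_signal, p_noise):
    """SNR as a plain power ratio; > 1 means more signal than noise."""
    return p_signal / p_noise

def snr_db(p_signal, p_noise):
    """SNR in decibels: 10 * log10 of the power ratio."""
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal, a_noise):
    """With RMS amplitudes measured across the same impedance, the
    power ratio is the square of the amplitude ratio, hence the 20."""
    return 20 * math.log10(a_signal / a_noise)

# a 100:1 power ratio and a 10:1 RMS amplitude ratio are both 20 dB
```

The factor of 20 in the amplitude form is just the factor of 10 applied to the squared ratio, since log of a square doubles the logarithm.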
In decibels, the SNR is, by definition, 10 times the logarithm of the power ratio: SNR(dB) = 10 log10(P_signal / P_noise). Cutoff rate: For any given system of coding and decoding, there exists what is known as a cutoff rate R0, typically corresponding to an Eb/N0 about 2 dB above the Shannon capacity limit. The cutoff rate used to be thought of as the limit on practical error correction codes without an unbounded increase in processing complexity, but it has been rendered largely obsolete by the more recent discovery of turbo codes. Bit error rate: In digital transmission, the bit error rate or bit error ratio (BER) is the number of received binary bits that have been altered due to noise and interference, divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage. As an example, assume this transmitted bit sequence: 0 1 1 0 0 0 1 0 1 1, and the following received bit sequence: 0 0 1 0 1 0 1 0 0 1. The BER is, in this case, 3 incorrect bits divided by 10 transferred bits, resulting in a BER of 0.3 or 30%. The bit error probability pe is the expectation value of the BER. The BER can be considered an approximate estimate of the bit error probability; the approximation is accurate for a long studied time interval and a high number of bit errors. Factors affecting the BER: In a communication system, the receiver-side BER may be affected by transmission channel noise, interference, distortion, bit synchronization problems, attenuation, wireless multipath fading, etc. The BER may be improved by choosing a strong signal strength (unless this causes cross-talk and more bit errors), by choosing a slow and robust modulation scheme or line coding scheme, and by applying channel coding schemes such as redundant forward error correction codes.
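The worked BER example above can be checked with a few lines of Python; the bit sequences are exactly the ones given in the text:

```python
def bit_error_rate(sent, received):
    """BER = number of altered bits / total number of transferred bits."""
    assert len(sent) == len(received), "sequences must be the same length"
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1]  # transmitted sequence
received = [0, 0, 1, 0, 1, 0, 1, 0, 0, 1]  # received sequence
ber = bit_error_rate(sent, received)       # 3 errors / 10 bits = 0.3
```

Counting the mismatched positions (2nd, 5th, and 9th bits) reproduces the 30% figure from the text.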
The transmission BER is the number of detected bits that are incorrect before error correction, divided by the total number of transferred bits (including redundant error codes). The information BER, approximately equal to the decoding error probability, is the number of decoded bits that remain incorrect after error correction, divided by the total number of decoded bits (the useful information). Normally the transmission BER is larger than the information BER. The information BER is affected by the strength of the forward error correction code. CHAPTER II Pulse-code modulation: Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals; it was invented by Alec Reeves in 1937. It is the standard form for digital audio in computers and various Compact Disc and DVD formats, as well as in other uses such as digital telephone systems. A PCM stream is a digital representation of an analog signal, in which the magnitude of the analog signal is sampled regularly at uniform intervals, with each sample being quantized to the nearest value within a range of digital steps. PCM streams have two basic properties that determine their fidelity to the original analog signal: the sampling rate, which is the number of times per second that samples are taken; and the bit depth, which determines the number of possible digital values that each sample can take. Digitization as part of the PCM process: In conventional PCM, the analog signal may be processed (e.g. by amplitude compression) before being digitized. Once the signal is digitized, the PCM signal is usually subjected to further processing (e.g. digital data compression). PCM with linear quantization is known as Linear PCM (LPCM). Some forms of PCM combine signal processing with coding. Older versions of these systems applied the processing in the analog domain as part of the A/D process; newer implementations do so in the digital domain.
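The two fidelity parameters just described, sampling rate and bit depth, can be illustrated with a minimal uniform quantizer. This Python sketch is illustrative only (it is not how any particular codec is implemented) and assumes input samples in the range [-1, 1]:

```python
import math

def pcm_quantize(samples, bits=8):
    """Uniform PCM: map samples in [-1, 1] to the nearest of 2**bits
    integer codes. More bits means finer steps and higher fidelity."""
    levels = 2 ** bits
    codes = []
    for x in samples:
        code = int(round((x + 1) / 2 * (levels - 1)))  # nearest level
        codes.append(min(max(code, 0), levels - 1))    # clamp to range
    return codes

def pcm_dequantize(codes, bits=8):
    """Map integer codes back to amplitudes in [-1, 1]."""
    levels = 2 ** bits
    return [c / (levels - 1) * 2 - 1 for c in codes]

# 100 samples per cycle is the "sampling rate" knob; bits is the depth knob
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
codes = pcm_quantize(sine)          # 8-bit depth: 256 possible values
rebuilt = pcm_dequantize(codes)
```

At 8 bits the worst-case quantization error is half of one step, about 0.004 on this scale; halving the bit depth would double the step size and the error.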
These simple techniques have been largely rendered obsolete by modern transform-based audio compression techniques.

* DPCM encodes the PCM values as differences between the current and the predicted value. An algorithm predicts the next sample based on the previous samples, and the encoder stores only the difference between this prediction and the actual value. If the prediction is reasonable, fewer bits can be used to represent the same information. For audio, this type of encoding reduces the number of bits required per sample by about 25% compared to PCM.

* Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of the quantization step, to allow further reduction of the required bandwidth for a given signal-to-noise ratio.

* Delta modulation is a form of DPCM which uses one bit per sample.

In telephony, a standard audio signal for a single phone call is encoded as 8000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital signal known as DS0. The default signal compression encoding on a DS0 is either μ-law (mu-law) PCM (North America and Japan) or A-law PCM (Europe and most of the rest of the world). These are logarithmic compression systems in which a 12- or 13-bit linear PCM sample number is mapped into an 8-bit value. This system is described by the international standard G.711. An alternative proposal for a floating-point representation, with a 5-bit mantissa and 3-bit radix, was abandoned.

Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8-bit μ-law or A-law PCM samples into a series of 4-bit ADPCM samples. In this way, the capacity of the line is doubled. The technique is detailed in the G.726 standard. Later it was found that even further compression was possible, and additional standards were published.
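The differential idea behind DPCM can be sketched with a first-order predictor that simply uses the previous sample. This is a minimal illustration, not a production codec: a real ADPCM coder would also quantize the residuals and adapt the step size, as described above.

```python
def dpcm_encode(samples):
    """First-order DPCM: predict each sample as the previous one and
    store only the difference between prediction and actual value."""
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    """Rebuild the samples by accumulating the stored differences."""
    prev = 0
    samples = []
    for d in residuals:
        prev += d
        samples.append(prev)
    return samples

pcm = [100, 102, 105, 104, 100]   # slowly varying samples
diffs = dpcm_encode(pcm)          # [100, 2, 3, -1, -4]: mostly small values
assert dpcm_decode(diffs) == pcm  # lossless round trip
```

Because the residuals are small for slowly varying signals, they can be represented with fewer bits than the raw samples, which is the source of DPCM's savings.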
Pulse-code modulation (PCM) data are transmitted as a serial bit stream of binary-coded time-division multiplexed words. When PCM is transmitted, premodulation filtering shall be used to confine the radiated RF spectrum. These standards define pulse train structure and system design characteristics for the implementation of PCM telemetry formats.

Class Distinctions and Bit-Oriented Characteristics

The PCM formats are divided into two classes for reference. Serial bit stream characteristics are described below prior to frame- and word-oriented definitions.
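The serial bit stream of time-division multiplexed words described above can be sketched as follows. This is a hypothetical helper of my own, assuming MSB-first serialization and equal-length words; real telemetry formats add sync patterns and other frame structure.

```python
def multiplex_words(channels, word_bits=8):
    """Time-division multiplex: take one word from each channel in turn,
    then serialize every word to bits, most significant bit first."""
    bits = []
    for frame in zip(*channels):       # one word per channel per frame
        for word in frame:
            for i in range(word_bits - 1, -1, -1):
                bits.append((word >> i) & 1)
    return bits

# Two channels of two 8-bit words each -> a 32-bit serial stream.
stream = multiplex_words([[0xA5, 0x01], [0x3C, 0x02]])
print(len(stream))  # 32
print(stream[:8])   # [1, 0, 1, 0, 0, 1, 0, 1], i.e. 0xA5 MSB first
```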

Friday, October 25, 2019

Capitalism and the Joy of Working

Enjoyment of work and creativity is more important to most people than higher pay. Employers can't pay to get more creativity because it is not just about the money. Something meaningful and challenging is generally more important for new workers coming into the workforce. No longer is it the hope of reaching fame or making money that drives the workforce; it's the opportunity to do the work that is enjoyed. Mihaly Csikszentmihalyi, a psychologist at the University of Chicago and author of Finding Flow: The Psychology of Engagement With Everyday Life, has found through his research that for some people, paying them to do things they enjoy actually reduces their interest in doing those things. Another theory is that if you take your hobby and turn it into a career, you won't enjoy it as much.

Capitalism plays a key factor in creativity because the workforce needs to be stimulated in order to produce good results. "Cracking the whip" on an assembly line stifles creativity in the workplace, and most workplaces are not assembly lines like they were a while back. Leaders that work under an authoritarian model stifle creativity and innovation. This will ultimately lead to low productivity and low turnover within the workforce. The "good life" just doesn't happen anymore. There aren't millions of people working in assembly lines and in automobile manufacturing plants; people are creating their happiness, and most of it is a direct result of how they spend their time while they are punched into a clock.

When what we do at work is meaningful, people don't get bored or distracted; they get so involved they forget to eat. The world, and capitalism, needs creativity and innovation, and without it there would be a lull in change and technology. Obviously, change and technology are what drive our capitalistic society.
I remember my father always telling me that in order to appreciate and value the things you have, you have to work for them yourself. I think the same holds true for business ventures. Having a personal interest and a personal bank account on the line drives one to succeed, possibly all the more than working for a set paycheck.

Wealth and prosperity are created with capitalism. Freedom, self-interest and competition make for a healthy environment engulfed in capitalism. Freedom is the right to exchange products and capital. Self-interest is the right to pursue one's own happiness (which, after all, is the American way), which transforms into pursuing one's own business and using it to appeal to consumers.

Thursday, October 24, 2019

Kenneth Lay guilty?

Kenneth Lay was an American entrepreneur widely notorious for his part in a corruption scandal that led to the breakdown of Enron Corporation. Kenneth Lay swore that he was innocent of all the things with which he was charged. He further claimed that the ones to blame for the downfall of Enron were none other than Andrew Fastow and the Wall Street Journal. Lay claimed that he was nothing but an innocent man who got hoodwinked in the process. Lay stated that all his problems occurred when he stepped into the position after Skilling left. He further argued that he had nothing to do with all of the predicaments they faced. A report from NPR stated that all of the allegations charged against Lay mirror the deeds he took after assuming the position upon Skilling's departure. However, this report is a little hypocritical because there are also other cases presented in court in which the allegations concern troubles from before Lay took up the position.

Anyone who had been watching the story could fathom that Lay was really guilty of all the charges against him for the deceit that Enron committed against the American community. His tale, which shows Enron as a fancy company save for the pranks of Fastow and the dire admonitions of the Wall Street Journal, is nothing but a bunch of nonsense. As CFO, Fastow definitely did not construct alone the frauds that had the Enron dealers merrily chatting about tearing off Grandma Millie. Those kinds of corruption were definitely structured and supported by people at the top. Lay may go on announcing until the end of his life that he is not guilty of the dealings of his underlings, but he was still accountable for the people he put in those positions, and it was definitely his task to examine the allegations of corruption once they were brought under his nose.
As can be seen, Lay is guilty of all the charges against him; he did society wrong by stealing so much from the American public, and he has the nerve to deny it and to blame his underlings for the things he did. With that, I stand firm in my position that Lay is guilty of all the charges against him.

Wednesday, October 23, 2019

Racial and Criminal Profiling: a Deductive Argument Essay

Erin Callihan, AIUSA, states that "Increased national security should not equate to decreased civil liberties. All people are entitled to due process and other basic human rights and constitutional protections" (Amnesty International). Racial profiling, according to Amnesty International, occurs when race is used by law enforcement or private security officials, to any degree, as a basis for criminal suspicion in non-suspect-specific investigations. The Constitution, which is arguably the most important document of the United States, clearly states that every person has the rights to life, liberty and the pursuit of happiness. This document sets the American people apart from many other countries in that it is supposed to give us equal rights. An issue that has arisen in the United States time and time again, and has threatened this equality, is that of race and racism. Now, in law enforcement, from the level of your local police department to that of prestigious FBI units, there is specialization in profiling, racial profiling to be more exact. Racial profiling has not only proven to be largely unsuccessful; it violates our equal rights, resulting in overrepresentation in America's prisons and discrimination in the real world.

Race is a socially constructed form of categorization that has often been misunderstood, leading to different forms of racism. It is a set of shared interests, characteristics, and culture. Race is an illusion that has been created to construct identity. Identity is not totally decided by you, but chosen for you by what people have decided about you. The way that people see other people and things as right or wrong depends on the culture the individual is living in. This makes identity something that is mostly cultural. Race is like a stereotype, or overgeneralization, that creates prejudices that lead to racism. A prejudice is any preconceived opinion formed without correct or adequate information.
As something socially constructed through culture, race is difficult to measure and apply to people because it is self-identified. According to Aliya Saperstein and Andrew M. Penner in their article "The Race of a Criminal Record: How Incarceration Colors Racial Perceptions," "Most research on race in the United States treats race as an intrinsic characteristic of individuals, a fixed group membership ascribed at birth and based on one's ancestry" (93). This is difficult to put into use in the real world, because if you have one idea of what each race is, you will find that people are different depending on where you are, the time period, the amount of interaction with other cultures, and the history of that land, among many other variables. An example of this would be how I was considered to be really Mexican at UCSB, yet I am considered "white-washed" by my family, and I consider myself to be a combination of both, as well as Colombian. Having grown up a first-generation American, being Latina has been very hard on me. When I studied abroad last year in Argentina, I was not considered Latina at all, but White. The Argentines had a different perception of race and insisted that it didn't matter where your parents were from; it only mattered where you were born. The majority of the population does not fit into the one mold most researchers have put them in.

Race is affected by the population in power and as such can be seen as a way to keep the status quo. The minorities in a society are often the ones that have a negative reputation and have to deal with the social construct others have made about them. Examples of minorities would be Blacks, Latinos and Muslims. These three groups have faced a lot of scrutiny here in the United States. They have been accused of making up a large part of the crime population, of being uneducated, and of being terrorists.
Although most are not, this is the stereotype they have to live with every day. When you are part of the majority, you get to make up your own identity, which usually ends up being positive. When you are part of the majority, which in the case of the United States would be Whites, it usually means you are doing well, or better than others, socially. Other things associated with Whites would be higher education and the suburbs. As the dominant culture, all the laws that are created have had them in mind. Racism is institutional prejudice, and as such it is hidden. Therefore, I would argue that in order to be racist you need to be part of the dominant culture.

The misrepresentation in incarceration rates is an example of racial profiling being unconstitutional. The majority of the incarcerated population is made up of Blacks and Latinos. Can it be that they are truly a crime-committing race, and that since Whites are educated they commit less than half the crime? The answer to this is no. African Americans have been treated as felons since the history of the United States began. They were seen as lowly and uneducated and convicted of crimes they did not commit. Unable to fight back, because no one would listen or care even if they knew they were wrong, they had to endure punishment. It is a fact that if you are part of the dominant culture the punishment will be less severe. The thing about the Rodney King incident that enraged people was not whether he was guilty or not; it was the manner in which he was treated. He was beaten severely and unfairly, without a trial to determine whether he was guilty. In the eyes of the law you are "innocent until proven guilty," and Mr. King was never given a fighting chance.
Another example of discrimination through racism would be the immigration law in Arizona that "requires police officers, 'when practicable,' to detain people they reasonably suspect are in the country without authorization and to verify their status with federal officials," according to Randall C. Archibold of the New York Times (par. 22). How is a person reasonably suspected of being in the country illegally? This is done through physical features. The Fourth Amendment provides protection against unreasonable searches, including those based on race. Is this law not an example of that?

Saperstein and Penner argue that racial profiling, through incarceration rates, affects individuals, families and communities (93-94). If we start from the top, we see that Latinos and Blacks do not constitute even half of our government, making it unrepresentative of our population. One way racial profiling affects the individual is by making it harder for them to obtain a job, let alone a well-paying job. Sometimes the individual has to work at a young age to help their parents with rent and other necessities. This is why we see, and therefore associate, Latinos and Blacks in low-income neighborhoods. Once you are part of a minority and have been incarcerated, the odds of succeeding in life get significantly slimmer. According to Saperstein and Penner, if you have been incarcerated for something narcotics-related, you are disqualified from much of the aid the government offers. On your FAFSA application you are asked if you have been convicted of any drug-related felony. If you answer yes, you are not eligible for financial aid. Since most of these families cannot afford to send their children off to college, that option completely diminishes. As a result, you have communities with low income, where most are educated only to the high school level, if that, and unemployment is high. Let's put aside the fact that racial profiling goes against the Constitution and look to see if it actually works.
According to some sources, the FBI's use of criminal profiling has a low success rate; some would argue its success rate can be equaled by that of psychics. Captain Ron Davis of the Oakland Police Department said it best on September 9, 2003, when he stated to NOBLE that "Racial profiling . . . is one of the most ineffective strategies, and I call it nothing less than lazy, sloppy police work. It's basically saying you don't want to learn about your community, you don't want to learn about people's behavior, you don't want to do your job, and don't want to investigate, you just want to stop a lot of people and see if you can come up with some statistical number at the end of the evening. . . ." (Amnesty International). There has been criticism of the process because, essentially, what profilers are doing is setting aside the hard evidence and guessing at a picture of what the perpetrator looks like. Profilers have forgotten what fieldwork is and have become armchair professionals who don't need to go to the crime scene to get insight. In What the Dog Saw, Malcolm Gladwell describes the job of a profiler as relying on typology to paint a picture of the killer. Most of the reasoning behind this technique is that of homology, the relationship between the culprit and the action. Gladwell noticed that there were two categories of killers, organized and disorganized. The organized killer chose their victim carefully and went to great lengths not to be caught. The disorganized killer chose their victim randomly, usually with a high chance of being caught. Gladwell finds that people don't fall strictly into one category, and therefore crimes don't fall into one category; you can have the same crime done for different motives. By relying on connections based on theories they have made up, profilers have created the guessing game that Gladwell calls a "party trick" (354).
The moral of his story is, in a way, like Einstein's: if you get enough wrongs, you eventually get a right. However, there is too much at stake, not least people's lives, to play a guessing game at that level. Racial profiling and criminal profiling are unconstitutional and, frankly, a waste of time. Racial profiling opens the door to, and accepts, discrimination to uphold the status quo. Criminal profiling is a waste of time and tax money, and it is obscured by racial profiling. Let's stop with these erroneous shortcuts and actually take the time to evaluate what racial profiling does to others.

Works Cited

Amnesty International | Working to Protect Human Rights. Amnesty International USA, 2011. Web. 20 Mar. 2011.

Archibold, Randall C. "Arizona Enacts Stringent Law on Immigration." New York Times 23 Apr. 2010, New York ed., A1 sec. New York Times, 23 Apr. 2010. Web. 21 Mar. 2011.

Gladwell, Malcolm. "Dangerous Minds." What the Dog Saw: And Other Adventures. Camberwell, Vic.: Allen Lane, 2009. 336-56. Print.

Saperstein, Aliya, and Andrew M. Penner. "The Race of a Criminal Record: How Incarceration Colors Racial Perceptions." Social Problems 57.1 (2010): 92-113. JSTOR. Web. 20 Mar. 2011.