Wednesday, October 30, 2019

The key to understanding common law systems is their adversarial nature - Essay

The key to understanding common law systems is their adversarial nature - Essay Example. Today, the common law is said to be a mixture not only of court judgments but also of statutes and equity, while still retaining its distinguishing characteristic of being unwritten, as opposed to statutory law, although many leading and precedent-setting cases have been printed in law reports and journals. 1 The common law system is, however, best understood by studying the components of its adversarial nature. Anglo-Saxon kings like Ine (689-725) and Alfred the Great (875-900) issued codes and laws during their reigns that were largely reflections of ancient customs together with some new innovations. The primitive practice of private vengeance in blood-feuds, for example, was not outlawed, but there were subtle moves to restrain it by imposing a tariff called wergild, set by the king, under which a man's value, determined by his social standing, had a corresponding price to be paid when he was wronged. 2 Common law countries, like Great Britain, the United States and Australia, employ the adversarial mode of trial, whilst Continental Europe observes the non-adversarial or inquisitorial judicial system. The distinction between the two is that "the adversarial mode of proceeding takes its shape from a contest or a dispute: it unfolds as an engagement of two adversaries before a relatively passive decision maker whose principal duty is to reach a verdict. The non-adversarial mode is structured as an official inquiry. Under the first system, the two adversaries take charge of most procedural action; under the second, officials perform most activities." 3 Moreover, adversarial systems are characterised by the following: the parties to the action control its flow or conduct; the trial consists of a continuous hearing and is the center of the judicial system; the production of evidence falls in the hands of the contending parties; the rules of court have no compulsory role. This is

Monday, October 28, 2019

P300-based Brain-Computer Interface Performance Enhancement

P300-based Brain-Computer Interface Performance Enhancement

Enhancing Performance and Bitrates in a P300-based Brain-Computer Interface for Disabled Subjects by Phase-to-Amplitude Cross-Frequency Coupling

Stavros I. Dimitriadis1,2*, Avraam D. Marimpis3
1Institute of Psychological Medicine and Clinical Neurosciences, Cardiff University School of Medicine, Cardiff, UK
2Cardiff University Brain Research Imaging Center, School of Psychology, Cardiff University, Cardiff, UK
3Brain Innovation B.V., Netherlands

Abstract
A brain-computer interface (BCI) is a communication system that transforms brain activity into specific commands for managing a computer or other home or electrical devices. In other words, a BCI is an alternative way of interacting with the environment by using brain activity instead of muscles and nerves. For that reason, BCI systems are of high clinical value for targeted populations suffering from neurological disorders. In this paper, we present a new processing approach for a well-known P300-BCI system for disabled subjects. By estimating cross-frequency coupling (CFC), namely δ-θ phase-to-amplitude coupling (PAC), within single sensors, we achieved high classification accuracy and high bitrates for both disabled and able-bodied subjects. The system is tested with four severely disabled and four able-bodied subjects. The bitrates obtained for both the disabled and able-bodied subjects reached the fastest reported level of 10 bits/sec. The new preprocessing approach is based on recordings from the single sensor Pz, while classification accuracy is also tested for other electrodes.

Keywords: Brain-computer interface; P300; Disabled subjects; cross-frequency coupling; accuracy

*Corresponding author: Dr. Stavros Dimitriadis, Research Fellow, School of Medicine, Cardiff University, UK; CUBRIC Neuroimaging Center, Cardiff, UK

Introduction
Since the very first work of Farwell and Donchin [1], the majority of P300-based Brain-Computer Interface (BCI) systems have focused on developing new application scenarios [2,3] and on developing and testing new algorithms for the reliable detection of the P300 waveform in noisy datasets [4-8]. For a review of the P300, the interested reader can consult [9,10]. Ten years ago, two pioneering studies were published presenting P300 BCI systems for disabled subjects. Piccione et al. (2006) [11] designed a 2D cursor BCI control system in which subjects had to concentrate on four arrows that flashed every 2.5 s in random order and occupied the peripheral area of a computer screen. Five disabled and seven able-bodied subjects participated in this experiment. This four-choice P300 flashing-arrow paradigm was used to control the cursor. EEG signals were recorded using four EEG sensors and the electro-oculogram. Using independent component analysis and neural networks, Piccione et al. [11] demonstrated that the P300 can be a valuable control signal for disabled subjects. However, the communication rate was low compared to state-of-the-art systems [5,8]. Sellers and Donchin (2006) [12] designed a four-choice BCI experiment with four stimuli (YES, NO, PASS, END) that were presented every 1.4 s in random order, in a visual, an auditory, or a combined modality. Three subjects suffering from ALS and three able-bodied subjects performed the experiment. EEG recordings from three sensors were classified using a stepwise linear discriminant analysis (LDA) algorithm.
They demonstrated that communication via a P300 system is possible for subjects suffering from ALS. Additionally, they demonstrated that communication is possible in different modalities: visual, auditory, and also a combined auditory-visual mode. However, both the classification accuracy and the communication rate were low compared to state-of-the-art results. One possible explanation for the low accuracy and communication rate could be the small number of EEG sensors, the long inter-stimulus intervals and the small number of trials. McCane et al. demonstrated a BCI system in which neither accuracy nor communication rate differed significantly between ALS users and healthy volunteers (HVs). Although ERP morphology was similar for the two groups, the target ERPs differed significantly in the location and amplitude of the late positivity (P300), the amplitude of the early negativity (N200), and the latency of the late negativity (LN) [13]. Hoffmann et al. demonstrated a six-choice P300 paradigm that was tested in a population of five disabled and four able-bodied subjects. Six different images were flashed in random order with an ISI of 400 ms [7]. They tested how the electrode configuration influences accuracy in order to identify the best channel selection. For four out of five disabled subjects and for all the able-bodied subjects, both the communication rates and the classification accuracies were higher compared to the aforementioned studies [11,12]. The datasets from the Hoffmann et al. study can be freely downloaded from the website of the EPFL BCI group (http://bci.epfl.ch/p300). In the present study, we used the dataset from the Hoffmann et al. study to demonstrate an alternative algorithmic approach whose main aim is to push the bitrates to their limits. To this end, we adopted a cross-frequency coupling (CFC) estimator, namely phase-to-amplitude coupling (PAC), to quantify how the phase of lower-frequency brain rhythms modulates the amplitude of higher oscillations. The whole approach was followed on a trial basis and within sensors located over parieto-occipital brain areas. PAC has proved to be a valuable estimator in many applications, such as the design of biomarkers for amnestic mild cognitive impairment during an auditory oddball paradigm [14], for dyslexia [15], and for mild traumatic brain injury [16]. The layout of the paper is as follows. In Section 2, we describe the subject population, the experiments that were performed, the data pre-processing steps of the proposed pipeline, and the classification procedure. Results are presented in Section 3. The discussion is given in Section 4.

2. Materials and Methods

2.1. Experimental setup
Users were facing a laptop screen on which six images were displayed (see Fig. 1). The images showed a television, a telephone, a lamp, a door, a window and a radio. The images were selected according to an application scenario in which users can control electrical appliances via a BCI system. The application scenario served only as an example, however, and was not pursued in further detail. The images were flashed in random sequences, one image at a time. Each flash of an image lasted for 100 ms and during the following 300 ms none of the images was flashed, i.e. the inter-stimulus interval was 400 ms. The EEG was recorded at a 2048 Hz sampling rate from 32 electrodes placed at the standard positions of the 10-20 international system.
A Biosemi Active Two amplifier was used for amplification and analog-to-digital conversion of the EEG signals. [Figure 1 around here]

2.2. Subjects
The proposed methodology was applied to P300 BCI-oriented recordings from five disabled and four healthy subjects. The demographics of the first four disabled subjects are presented in Table 1. Disabled subject 5 was excluded from further analysis. Subjects 6-9 were Ph.D. students recruited from the EPFL BCI group's laboratory (all male, age 30 ± 2.3). None of subjects 6-9 had known neurological deficits. For more information regarding the subjects, the interested reader should refer to the original paper [7].

Table 1. Subjects from which data was recorded in the study of the environment control system.
Subject | Diagnosis | Age | Age at illness onset | Sex | Speech production | Limb muscle control | Respiration control | Voluntary eye movement
S1 | Cerebral palsy | 56 | 0 (perinatal) | M | Mild dysarthria | Weak | Normal | Normal
S2 | Multiple sclerosis | 51 | 37 | M | Mild dysarthria | Weak | Normal | Mild nystagmus
S3 | Late-stage amyotrophic lateral sclerosis | 47 | 39 | M | Severe dysarthria | Very weak | Weak | Normal
S4 | Traumatic brain and spinal-cord injury, C4 level | 33 | 27 | F | Mild dysarthria | Weak | Normal | Normal

2.3. Experimental schedule
Each subject completed four recording sessions. The first two sessions were performed on one day and the last two sessions on another day. For each subject, no more than two weeks elapsed between the first and the last session. Each of the sessions consisted of six runs, one run for each of the six images. For further details about the protocol followed in this experiment, see the original paper related to this dataset [7]. The following protocol was used in each of the runs. (i) Subjects were asked to count silently how often a prescribed image was flashed (for example: "Now please count how often the image with the television is flashed"). (ii) The six images were displayed on the screen and a warning tone was issued. (iii) Four seconds after the warning tone, a random sequence of flashes was started and the EEG was recorded. The sequence of flashes was block-randomized; this means that after six flashes each image had been flashed once, after twelve flashes each image had been flashed twice, etc. The number of blocks was chosen randomly between 20 and 25. On average, 22.5 blocks of six flashes were displayed in one run, i.e. one run consisted on average of 22.5 target (P300) trials and 22.5×5 = 112.5 non-target (non-P300) trials. (iv) In the second, third, and fourth session the target image was inferred from the EEG with a simple classifier. At the end of each run the image inferred by the classification algorithm was flashed five times to give feedback to the user. (v) After each run subjects were asked what their counting result was. This was done in order to monitor the performance of the subjects. The duration of one run was approximately one minute, and the duration of one session, including setup of electrodes and short breaks between runs, was approximately 30 min. One session comprised on average 810 trials, and the whole dataset for one subject consisted on average of 3240 trials.

2.4. Offline Analysis
The impact of different single-sensor recordings on classification accuracy was tested in an offline procedure. For each subject, four-fold cross-validation was used to estimate average classification accuracy. More specifically, the data from three recording sessions were used to train a classifier and the data from the left-out session were used for validation.
This procedure was repeated four times so that each session served once for validation.

2.4.1. Preprocessing
Before learning a classification function and before validation, several preprocessing operations were applied to the data, in the order stated below. (i) Referencing. The average signal from the two mastoid electrodes was used for referencing. (ii) Filtering. A third-order forward-backward Butterworth bandpass filter was used to filter the data. The MATLAB function butter was used to compute the filter coefficients and the function filtfilt was used for filtering. The predefined frequency bands were: δ {0.5-4 Hz}, θ {4-8 Hz}, α1 {8-10 Hz}, α2 {10-13 Hz}, β1 {13-20 Hz}, β2 {20-30 Hz} and γ1 {30-45 Hz}. (iii) Downsampling. The EEG was down-sampled from 2048 Hz to 512 Hz by selecting every 4th sample from the bandpass-filtered data. (iv) Single trial extraction. Single trials of duration 1000 ms were extracted from the data. Single trials started at stimulus onset, i.e. at the beginning of the intensification of an image, and ended 1000 ms after stimulus onset. Due to the ISI of 400 ms, the last 600 ms of each trial overlapped with the first 600 ms of the following trial. (v) Electrode selection. We applied our analysis to recordings of single-sensor activity, mainly from PZ, OZ, P3, P4, P7 and P8. (vi) Feature vector construction. As the feature for each trial, we used phase-to-amplitude coupling (PAC), which has already shown its potential for building reliable biomarkers (Dimitriadis et al., 2015, 2016). PAC was estimated for each frequency pair (see (ii)). The description of PAC is given in the next section. As a complementary feature that can separate the counted stimuli from the non-counted stimuli, the relative alpha signal power was estimated. The alpha power level gives a valuable and objective criterion of whether or not a subject attends to the stimulus. Our idea is to create an initial binary classifier that separates the attended from the non-attended stimuli for each subject before the main multi-class classifier is applied.

CFC metric computation
CFC estimates the strength of pairwise interactions and identifies the prominent interacting pair of frequencies, both between and within sensors [17-19]. Among available CFC descriptors, phase-amplitude coupling (PAC), which relies on phase coherence, is the one most commonly encountered in research [20]. The PAC algorithm, as adapted to continuous multichannel recordings, is described below; here the within-sensor CFC version is presented. Let x(isensor, t) be the EEG activity recorded at the isensor-th site, with t = 1, 2, ..., T the successive time points. Given the frequency-limited signal x(isensor, t), cross-frequency coupling is estimated by allowing the phase of the lower-frequency (LF) oscillations to modulate the amplitude of the higher-frequency (HF) oscillations. The complex analytic representations of each signal, zLF(t) and zHF(t), are derived via the Hilbert transform (HT[.]). Next, the envelope of the higher-frequency oscillations, AHF(t), is bandpass-filtered within the range of the LF oscillations and the resulting signal is submitted to an additional Hilbert transform to derive its phase-dynamics component φ'(t), which expresses the modulation of the amplitude of the HF oscillations by the phase of the LF oscillations.
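As an illustration only, this within-sensor PAC pipeline can be sketched in a few lines of Python with NumPy/SciPy. This is a minimal sketch, not the authors' implementation: the function and variable names are ours, the band limits follow the list in (ii), and the phase-consistency index returned at the end is the phase-locking value defined in the next paragraph.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=3):
    """Zero-phase (forward-backward) Butterworth bandpass, as in the paper."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_phases(x, fs, lf=(0.5, 4.0), hf=(4.0, 8.0)):
    """Return the two phase series whose consistency defines delta-theta PAC.

    x  : single-sensor, single-trial EEG (1-D array)
    lf : (low, high) cutoffs of the phase-giving band (delta here)
    hf : (low, high) cutoffs of the amplitude-giving band (theta here)
    """
    # Phase of the low-frequency oscillation
    phi_lf = np.angle(hilbert(bandpass(x, lf[0], lf[1], fs)))
    # Envelope of the high-frequency oscillation ...
    a_hf = np.abs(hilbert(bandpass(x, hf[0], hf[1], fs)))
    # ... bandpass-filtered within the LF range and Hilbert-transformed again,
    # giving the phase phi'(t) that tracks the LF modulation of the HF amplitude
    phi_mod = np.angle(hilbert(bandpass(a_hf, lf[0], lf[1], fs)))
    return phi_lf, phi_mod

def plv(phi1, phi2, imaginary=False):
    """Phase-locking value (or its imaginary part) between two phase series."""
    c = np.mean(np.exp(1j * (phi1 - phi2)))
    return np.abs(np.imag(c)) if imaginary else np.abs(c)

# Usage on one hypothetical 1-s trial of 512 samples from sensor Pz:
# phi_lf, phi_mod = pac_phases(trial, fs=512)
# pac_value = plv(phi_lf, phi_mod, imaginary=True)
```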
Phase consistency between the two time series was measured by means of both the original definition of the phase-locking value (PLV) [21] and the imaginary portion of the PLV, as synchronization indices quantifying the strength of PAC. The original PLV is defined as

PLV = | (1/T) Σ_{t=1..T} exp( i [φ_LF(t) − φ'(t)] ) |

and the imaginary part of the PLV as

iPLV = | Im{ (1/T) Σ_{t=1..T} exp( i [φ_LF(t) − φ'(t)] ) } |

The imaginary portion of the PLV is considered to be less susceptible to volume conduction effects when assessing CFC interactions. While the imaginary part of the PLV is not affected by volume conduction, it can be sensitive to changes in the angle between two signals that do not necessarily imply a PLV change. In general, the imaginary portion of the PLV is only sensitive to non-zero phase lags and is thus resistant to instantaneous self-interactions associated with volume conduction [22]. For further details and applications, the interested reader can consult our previous work [14,15]. In the present study, as already mentioned, we used 7 frequency bands, which means that PAC is estimated for 7×6/2 = 21 cross-frequency pairs, e.g. δφ-θA, δφ-α1A, where φ and A denote the phase and the amplitude of each frequency band. Figure 2 demonstrates the pre-processing steps of the PAC estimator for a trial of subject 6 at target image 6. [Figure 2 around here]

Signal Power
We estimated the relative power of each band-passed frequency signal segment with the following equations:

SP_f = Σ_t x_f(t)^2     (3)
RSP_f = SP_f / Σ_f' SP_f'     (4)

The first equation quantifies the signal power (SP) of each frequency band as the sum of the squared filtered signal over samples (3), while equation (4) divides the SP by the sum of the SP over all frequency bands, which gives the relative signal power (RSP). The whole approach was repeated for every trial, session and subject.

2.4.4. Machine learning and classification
Training data sets contained 405 target trials and 2025 non-target trials, and validation data sets consisted of 135 target and 675 non-target trials (these are average values; cf. Section 2.3). Adopting a sequential feature selection algorithm, we detected the characteristic cross-frequency pair whose PAC value gives the highest discrimination of each target image from the rest, based on the training data set. Additionally, we used the same feature selection algorithm to detect the relative signal power that separates the counted flashing images from the non-counted images. We trained a multi-class SVM classifier on the selected PAC estimates from specific cross-frequency pairs and then tested the classifier on the validation data to get the response tailored to each target image [23]. The training set consisted of the first session, while the remaining three sessions were used for validating the whole analytic scheme. A k-nearest neighbour (k-NN) classifier was applied to differentiate the attended from the non-attended flashing images prior to the multi-class SVM classifier.

2.4.5 Performance Evaluation
Classification accuracy and the information transfer rate (ITR) were calculated separately for the offline experiments. The ITR (in bits per second) was calculated as follows (5):

ITR = [ log2(N) + P·log2(P) + (1 − P)·log2( (1 − P)/(N − 1) ) ] / T     (5)

where N is the number of classes (i.e., 6 in this study), P is the accuracy of target identification, and T (seconds per selection) is the average time for a selection.

Results

δ-θ Phase-to-Amplitude Coupling as a Valuable Feature for the BCI-P300 System
We estimated both PAC and relative signal power (RSP) for the first 32 samples (60 ms), increasing the window up to 500 ms (256 samples) with a step of 12 samples (5 ms).
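Before turning to the detailed results, the two-stage classification scheme of Section 2.4.4 can be made concrete with a rough sketch using scikit-learn. This is an illustration under stated assumptions, not the authors' implementation: the library, the value of k, the SVM kernel and all names are ours; only the structure (a k-NN gate on α1 relative signal power followed by a multi-class SVM on the selected PAC feature) follows the description above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Per-trial features (shapes are illustrative):
#   X_pac   : (n_trials, 1) selected delta-theta PAC value (Section 2.4.4)
#   X_alpha : (n_trials, 1) alpha1 relative signal power
#   y_image : (n_trials,)   target-image label 1..6
#   y_att   : (n_trials,)   1 = attended (counted) trial, 0 = non-attended

def fit_two_stage(X_alpha, X_pac, y_att, y_image, k=3):
    """Stage 1: k-NN gate on alpha1 RSP; stage 2: multi-class SVM on PAC."""
    gate = KNeighborsClassifier(n_neighbors=k).fit(X_alpha, y_att)
    svm = SVC(kernel="rbf", decision_function_shape="ovr")
    svm.fit(X_pac[y_att == 1], y_image[y_att == 1])
    return gate, svm

def predict_two_stage(gate, svm, X_alpha, X_pac):
    """Only trials the gate marks as attended are passed on to the SVM."""
    attended = gate.predict(X_alpha).astype(bool)
    labels = np.zeros(len(X_pac), dtype=int)   # 0 = rejected by the gate
    if attended.any():
        labels[attended] = svm.predict(X_pac[attended])
    return labels
```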
The sequential feature selection algorithm detected only one PAC feature out of the 21 possible cross-frequency pairs as the unique candidate feature for separating the six classes of image stimuli. δφ-θA was the selected feature for both disabled and able-bodied subjects. The group-averaged classification performance was computed for each sensor location using the first 100 ms, for both able-bodied and disabled subjects. The errors were detected on the trials where the subject missed the flashing image. The classification performance with the use of a kNN classifier prior to the multi-class SVM was 100% for every subject and for all the pre-selected sensors, namely the PZ, OZ, P3, P4, P7 and P8 EEG sensors. Figures 3 and 4 illustrate the trial-related (grand-averaged) PAC-connectivity patterns (comodulograms) for subject 6 (able-bodied) and subject 1 (disabled), respectively, from target and non-target trials for each flashing image. The comodulograms differed when contrasting target versus non-target trials within each subject and target image, but also between the two images. δφ-θA was the unique feature that could clearly predict the target image for both disabled and able-bodied subjects. [Figures 3 and 4 around here]

Attention and Alpha Power
Prior to the multi-class SVM, we applied a kNN classifier based on α1 signal power, which was selected as the feature that can discriminate counted from non-counted flashing images. The kNN classifier achieved 100% separation of attended from non-attended trials for each subject and further improved the performance of the multi-class SVM to 100%. We achieved this performance using the α1 relative signal power estimated from the first 100 ms, for both able-bodied and disabled subjects. The classification performance with the kNN classifier was 100% for every subject and for all the pre-selected sensors, namely the PZ, OZ, P3, P4, P7 and P8 EEG sensors. Table 2 summarizes the group-averaged relative signal power (RSP) of the α1 frequency band for attended versus non-attended images.

Table 2. Group-averaged α1 relative signal power for attended and non-attended images, for the able-bodied and disabled groups.

Performance Evaluation
In the present study, we achieved bitrates of 10 bits/sec for both disabled and able-bodied subjects, for all the sensor locations used in the analysis. The time for estimating PAC and classifying a trial was 0.00001 s on a Windows 7, Intel i7 8-core machine.

Discussion
A novel approach to analysing single trials in a BCI system was introduced, based on the estimation of cross-frequency coupling (CFC), namely phase-to-amplitude coupling (PAC). PAC was estimated within EEG sensors from single trials recorded during a visual evoked experimental paradigm. The proposed analytic scheme is based on the extraction of a unique feature from the CFC patterns on a single-trial basis, namely the δφ-θA coupling, which served as a common feature for both able-bodied and disabled subjects. Our experiments showed a high classification rate (99.7%) based on the proposed PAC feature. Additionally, the superiority of our approach over popular alternative methodologies, such as the use of the original recordings, was evident from the achieved bitrates (10 bits/sec) and from the response time of the classification system (0.00001 s).
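For orientation, the reported bitrates can be related to accuracy and selection time through the ITR formula of equation (5). The snippet below is a small illustration with made-up numbers, assuming the standard Wolpaw definition of ITR; it is not a reproduction of the paper's calculation.

```python
import math

def itr_bits_per_sec(n_classes, accuracy, sec_per_selection):
    """Information transfer rate: bits per selection divided by selection time."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits / sec_per_selection

# Illustrative values only: 6 classes, 99% accuracy, one selection every 0.4 s
# (0.4 s is the inter-stimulus interval used in the experiment).
print(itr_bits_per_sec(6, 0.99, 0.4))   # ~6.2 bits/sec under these assumptions
```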
Complementarily, using a binary classifier trained on the α1 relative signal power prior to the multi-class SVM, we differentiated the attended from the non-attended stimuli, which further improved the classification performance to 100% in both groups. Compared to many other P300-based BCI systems designed for disabled users, we achieved the highest classification accuracy and bitrates higher than those originally reported for this dataset [7]. In previous studies, such as that of Sellers and Donchin (2006) [12], the best classification accuracy for the able-bodied and ALS subjects was on average 85% and 72%, respectively [12]. Hoffmann et al. achieved absolute classification accuracy for both disabled and able-bodied subjects in the first demonstration of the current dataset. However, they used longer time series of over 15-20 s, obtained by concatenating trials, in order to train the classifier better. Additionally, they used one classifier per image for each of the twenty blocks, and the final outcome was derived by majority voting over the twenty classifiers. Here, using phase-to-amplitude coupling as the appropriate descriptor of the evoked response in the parietal brain areas and a multi-class SVM classifier, we achieved almost absolute accuracy (99.97%) on a trial basis. Using an additional binary k-NN classifier and the α1 relative signal power prior to the multi-class SVM, we separated the attended (counted) from the non-attended (not counted) trials, which eliminated the trials misclassified by the multi-class SVM for every subject. This procedure further improved the classification performance from 99.97% to 100% for each subject. We achieved faster bitrates (10 bits/sec) than any other BCI system, including the fastest spelling system presented recently (5.32 bits/sec; [24]). In the previous study of Piccione et al. (2006) [11], average bitrates of about 8 bits/min were reported for both disabled and able-bodied subjects. Hoffmann et al. (2008) [7] reported average bitrates, obtained with electrode configuration (II) (8 electrodes), of 12.5 bits/min for the disabled subjects and 10 bits/min for the able-bodied subjects. According to Klimesch's α theory, in the early stages of perception α directs the information flow towards the neural substrates that represent information relevant for the encoding system (e.g. a visual stimulus to the visual system, voice/sound to the auditory system). The main physiological function of α is linked to inhibition. Klimesch's α theory hypothesizes that α enables access to stored information by inhibiting task-irrelevant neuronal substrates and by timing/synchronizing the cortical activity in task-relevant neuronal systems. Many research findings have shown that both evoked α and phase locking are evidence of a successful encoding of global stimulus features in an early post-stimulus interval of about 0-150 ms [25]. Besides low-frequency/high-frequency coupling (e.g., θ-γ; [26,27]), there is considerable evidence [28-31] that CFC also exists between the low-frequency bands (e.g., delta-theta, delta-alpha, and theta-alpha). Lakatos et al. (2005) [29] introduced a hypothesis about the hierarchical organization of EEG oscillations, suggesting that the amplitude of brain oscillations in a characteristic frequency band can be modulated by the oscillatory phase at a lower frequency.
In particular, they found that δ (1-4 Hz) phase modulates θ (4-10 Hz) amplitude, and θ phase modulates γ (30-50 Hz) amplitude, in the primary auditory cortex of awake macaque monkeys [29]. This multiplexed coupling, or nesting, of brain rhythms might reflect a general organizational principle of the brain, as evidence of coupling (mainly θ-γ) has also been observed in animals (e.g. rats, cats) and humans [32]. For instance, in auditory cortex, δ-band phase modulates the amplitude of θ-band ICMs, whose phase in turn modulates the amplitude of γ-band ICMs [33]. This indirect enhancement effect acts on the ongoing local neural activity in the primary auditory cortex. Their hypothesis supports the notion that neural oscillations reflect rhythmic shifting of the excitability states of neural substrates between high and low levels. This hypothesis is supported by the finding that oscillations can be entrained by visual input such that auditory input arriving during a high-excitability phase is amplified. In the present study, we demonstrated that δ (0.5-4 Hz) phase modulates θ (4-8 Hz) amplitude over visual brain areas in response to the flashing images and their content, and that this effect was observed mainly at parietal EEG recording sites. We should also mention that the reason why δφ-θA coupling discriminates the six flashing images can be directly linked to the content of the images. Visual attention samples image stimuli rhythmically, with a phase peak at 2 Hz [34], while flashing images induce rhythmic fluctuations at higher frequencies (6-10 Hz) [35], here within the θ frequency range (4-8 Hz). Finally, the work of Karakas et al. [36] showed that the ERP represents an interplay between oscillations mainly in the δ and θ frequencies, directly linked to the P300 [37].

Conclusion
In this work, an efficient algorithmic approach to a P300-based BCI system for disabled subjects was presented. We have shown that absolute classification accuracies and the highest reported bitrates can be obtained for severely disabled subjects using cross-frequency coupling, namely phase-to-amplitude coupling. Specifically, the modulation of θ (4-8 Hz) amplitude by δ (0.5-4 Hz) phase proved to be the PAC-derived feature that supported the highest classification accuracy, the fast bitrates and the fast response time of the multi-class system. Due to the use of the P300, only a small amount of training data (the trials of the 1st session as a training set and 100 ms per trial) was required to achieve good classification accuracy. Future improvements to the work presented could include the design of useful BCI applications adapted to the needs of disabled users. It might also be useful to perform exploratory analyses on larger populations and in real time to further validate the results of the present work.

Acknowledgements
SID was supported by MRC grant MR/K004360/1 (Behavioural and Neurophysiological Effects of Schizophrenia Risk Genes: A Multi-locus, Pathway Based Approach).

References
1. Farwell LA, Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalogr Clin Neurophysiol 1988;70:510-23.
2. Polikoff J, Bunnell H, Borkowski W. Toward a P300-based computer interface. In: Proceedings of the RESNA '95 Annual Conference; 1995.
3. Bayliss JD. Use of the evoked P3 component for control in a virtual apartment. IEEE Trans Neural Syst Rehab Eng 2003;11(2):113-6.
4. Xu N, Gao X, Hong B, Miao X, Gao S, Yang F. BCI competition 2003 Data Set IIb: Enhancing P300 wave detection using ICA-based subspace projections for BCI applications. IEEE Trans B

Friday, October 25, 2019

Interpretation of Poetic Sound

Understanding the Speaker's Voice: Through Interpretation of Poetic Sound

Classical, Early European, Eastern and Modern poetry share structural similarities in their use of rhythm, meter and rhyme; however, sound plays a more subtle role for purposes of interpretation. Poets combine structured rhythmic patterns and the formal arrangement of words with devices such as alliteration to create images in the reader's mind. Two contrasting poems written by William Blake, "The Lamb" from Songs of Innocence (1789) and "The Tyger" from Songs of Experience (1794), effectively illustrate how the fundamental use of poetic structure, selective alliteration and imagery accentuates the underlying sounds of a poem, thereby enabling the reader to better understand the voice or tone being portrayed by the speaker.

In Blake's opening lines of "The Lamb," the speaker sets the initial tone for the conversation that takes place between the child and the gentle creature: "Little Lamb, who made thee/Dost thou know who made thee" (Blake 1-2). As evidenced by the speaker's selective use of diction, the soft and non-threatening nature of the words establishes an atmosphere of child-like innocence and wonder that echoes throughout the remainder of the work. As the conversation progresses, the setting is established through the use of the words "stream" and "mead" (Blake 4), which is intended to suggest that the conversation is taking place outside, in a peaceful meadow. In subsequent lines of the poem, the child poses a series of softly worded phrases such as "Gave thee clothing of delight/Softest clothing wooly bright" (Blake 5-6). Although not initially obvious to the reader, through the selective use of alliteration the speaker has effectively introduced the characteristics and subtle rhythmic sound that is consistent with that of a childhood nursery rhyme. The speaker's melodious combination of repetition, diction and rhyme is further reinforced in the final two lines of the last stanza, "Little Lamb God bless thee/Little Lamb God bless thee" (Blake 19-20), which symbolically culminates in the child's belief that the miracle of creation resides in God himself.

There is a stark contrast between the opening lines of "The Lamb" and the opening lines of Blake's companion poem "The Tyger." In "The Tyger," the speaker immediately establishes a very different setting for the conversation that takes place between the child and the fearsome beast: "Tyger! Tyger! Burning bright/In the forests of the night" (Blake 1-2). Unlike the peaceful setting of "The Lamb," the image created in the reader's mind through the selective use of words like "burning," "forests," and "night" suggests that the conversation is taking place in an environment of uncertainty and darkness.

Thursday, October 24, 2019

Greek Food

Greek Food Greek cuisine is certainly one of the most sought-after in the entire world, but what is it about Greek food that makes it so exceptional? There are five features of Greek food that make it stand out. First, the basic ingredients in Greek dishes are usually nourishing. You'll find a lot of vegetables added to the mix, with fish, legumes and cereals being some of the other main ingredients in traditional recipes (S. Linda, 2012). Second, the food has a whole lot more flavor to it, simply because the locals use a lot of herbs and spices, including dill, garlic, oregano, onion, bay laurel leaves and mint. A few other choices include thyme and basil (S. Linda, 2012). Thirdly, Greek food is unique because the recipes are actually quite easy to make (S. Linda, 2012). The fourth distinguishing characteristic of Greek cuisine is that there is no beef. Lamb is the staple meat for most Greek dishes, because the terrain and the climate have made the breeding of sheep and goats better suited than cattle (S. Linda, 2012). Greek dishes usually come with a few mezedes, or appetizers. Each region has its specialty, which makes the food rather varied, so that you don't easily grow weary. A lot of these appetizers are packed with flavor and are the perfect balance of tradition, health and tastiness (Yao, B. H., 2012). Greek food has a few main components: cheese, fruit and vegetables, olives and olive oil, seafood and poultry, meat, and herbs and seasonings. The most common Greek cheese is feta. Other cheeses include Kasseri, a hard yellow cheese; Kefalotyri, a very salty cheese often served with pasta; Manouri, a soft white cheese often eaten on its own as an appetizer; and Mizithra, a soft, unsalted cheese used in pies and pastries (Binder, L). Greek cuisine follows the seasonal fruits and vegetables of the region (Johns, S). The warm climate of Greece makes it ideal for growing vegetables and fruits, and these are eaten in abundance. A multitude of colorful and flavorful vegetables form a fundamental part of Greek cuisine. These include tomatoes, garlic, onions, spinach, artichokes, fennel, lettuce, cabbage, horta or wild greens, zucchini, eggplant and peppers. Fruits are eaten either fresh or preserved by drying. Popular varieties include apricots, grapes, dates, cherries, apples, pears, plums and figs (Greek Cuisine). Olive oil and olives are a major part of Greek food; olive oil is the most common ingredient in Greek cuisine. The oil is used in most forms of cooking as well as in salad dressings and for dipping sauces (Johns, S). As well as being used for their oil, olives are also eaten whole. The most frequently eaten type is the plump kalamata olive, which is added to stews and salads or eaten as part of a meze or appetizer dish (Greek Cuisine). Greece is almost surrounded by sea, so it is not surprising that fish and shellfish are eaten regularly. The most popular types of fish and shellfish include tuna, mullet, bass, halibut, swordfish, anchovies, sardines, shrimp, octopus, squid and mussels. Fish and seafood are enjoyed in many ways: grilled and seasoned with garlic and lemon juice; baked with yogurt and herbs; cooked in rich tomato sauce; added to soups; or served cold as a side dish. Chicken is also eaten regularly, as are game birds such as quail and guinea fowl (Greek Cuisine). Meat doesn't play a prominent role in traditional Greek cuisine.
It's usually reserved for festivals and special occasions or used in small amounts as a flavor enhancer. When meat is eaten, it's most often sheep or goat, but these animals aren't just used for their meat. Sheep and goats also provide a valuable source of nourishment from their milk (Greek Cuisine). Many of Greece's most famous dishes involve some sort of meat. Gyros, which have become a fast American favorite in the past few years, are made with meat, usually lamb, roasted on a spit and served with sauce and veggies on folded pita bread. Lamb and potatoes is another extremely common Greek dish, as is souvlaki, which comprises anything made and served on a skewer. Chicken, pork and lamb souvlaki are the most common types (Binder, L). The spices and herbs in Greek dishes are garlic, basil and bay leaf. Mint, oregano and parsley are also often used in traditional Greek dishes (Johns, S). Greek desserts and beverages are also as unique as the culture. Dessert may be the most famous of Greece's culinary contributions, and Baklava, in particular, may be the most well-known. This phyllo-dough pastry is filled with nuts and covered in sweet syrup, and has become an American favorite. Other Greek desserts include Loukoumi, a starch-and-sugar treat; Koulourakia, butter cookies; and plain yogurt flavored with honey or syrup (Binder, L). However, fresh or dried fruit is the usual dessert. The rich desserts and pastries are mostly reserved for special occasions or eaten in small amounts (Greek Cuisine). Wine is consumed regularly in Greece, but mainly with food, and in moderation. Ouzo, an aniseed-flavored spirit, and beer are also popular alcoholic beverages. Strong black coffee is one of the most popular non-alcoholic beverages (Greek Cuisine).

Wednesday, October 23, 2019

Desert Places

Desert Places by Robert Frost

Snow falling and night falling fast, oh, fast
In a field I looked into going past,
And the ground almost covered smooth in snow,
But a few weeds and stubble showing last.

The woods around it have it - it is theirs.
All animals are smothered in their lairs.
I am too absent-spirited to count;
The loneliness includes me unawares.

And lonely as it is, that loneliness
Will be more lonely ere it will be less -
A blanker whiteness of benighted snow
With no expression, nothing to express.

They cannot scare me with their empty spaces
Between stars where no human race is.
I have it in me so much nearer home
To scare myself with my own desert places

In the poem "Desert Places" by Robert Frost, the speaker is a lonely man who does not feel a sense of belonging within himself. Winter does not offer to help the lonely man; instead it feeds his feelings of loneliness: "And the ground almost covered smooth in snow" (line 3). As line three indicates, the speaker is watching an empty field being covered by more and more snow. This connotes concealing the beauty of the field. The snow imagery communicates feelings of disappointing winter and emptiness. The observation of loneliness in winter and isolation from the world is nothing compared to the feelings of loneliness and emptiness within. This meaning is effectively communicated by the poem's imagery and by the denotation and connotation of the words Frost has chosen. In the first stanza, the setting is developed with the words 'night' and 'snow', which both carry negative connotations. Snow is employed throughout the poem to show a lack of identity; it also has the character of a cold and formless white sheet. These observations create an image of snow falling fast, destroying the beauty of the field and covering up everything that is living. Similarly, 'night' has a negative connotation of darkness, blackness and sightlessness that signals the depression and loneliness the speaker is feeling. The concept of 'falling fast', with both words mentioned twice in the first line of the first stanza, suggests an uncontrollable and unstoppable descent. All four words create images that describe the mood of the speaker's inescapable depression as a result of the 'ground covered smooth in the snow' (3) and the feeling of emptiness within. In the second stanza the word 'theirs' denotes belonging, suggesting the woods have something to feel a part of, while the speaker still feels lonely. The word 'smothered' denotes suffocation and blockage. Although the animals are 'smothered' by the snow and feel helpless and alone, they are smothered in 'their lairs'. The last line of the second stanza is important because the word 'loneliness' is mentioned for the first time in the poem. The word 'loneliness' denotes being without company, isolated. In line seven, the speaker is 'too absent-spirited to count'; he is sadly alone. In the eighth line, 'the loneliness includes me unawares', the speaker notices unexpectedly that he too is included in the 'loneliness'. It is not just the animals and the empty field covered with snow that the speaker blames for being lonely, but himself as well. The speaker loses enthusiasm. The third stanza is the most straightforward and haunting stanza of the poem because it practically induces 'loneliness' in the reader. 'Lonely' and 'loneliness' are mentioned three times in this stanza.
'Will be more lonely ere it will be less' (10): the speaker admits that the weather, and even more so his loneliness, will only get worse before it gets better. The words 'blanker' and 'benighted' are used in this stanza to convey how empty and lonesome the persona is feeling. In line twelve, the imagery of depression and absence of identity is further supported when the speaker compares himself to the snow, saying 'With no expression, nothing to express' (12), pointing to his lack of identity and his fall into loneliness. The fourth and last stanza is where the speaker is most confident. The word 'scare' is mentioned twice in this stanza, and it denotes fear. In the first line of the fourth stanza the speaker says he worries no more about empty and lonely spaces. The word 'stars' denotes space, but it also connotes loneliness, 'where no human race is' (14). The speaker no longer cowers before lonely empty spaces; he does not need empty fields covered with formless snow, or space filled with loneliness, to scare him; it is already inside him. The last line of the poem, 'To scare myself with my own desert places' (16), contains an image which displays Frost's thought that fear comes from within oneself rather than from without. No matter how you view or understand 'Desert Places' by Robert Frost, we can all agree that imagery, connotation, and denotation play an important role in conveying the poem's total meaning.

Tuesday, October 22, 2019

How to Stay Positive in the Middle of a Job Search

How to Stay Positive in the Middle of a Job Search

The job search process may seem lengthy at times, but there are things you can do to increase both the efficiency of the search and your resiliency. The most successful job hunters navigate the waters in a purposeful manner, using positivity as a guide. Let's take a look at some techniques that can help you achieve your goals.

Target a Job
This is really a two-pronged approach. First, understand the type of job that suits your talents. You likely have an idea of what this is, from the courses you've taken to the natural interests and abilities you possess. List the type of work at which you excel, and link that to pertinent job duties. Try to find companies where you might want to work, and then aim to connect your talents to job duties at these companies.

Network With Others
Networking with friends or colleagues who work in the industry or at a company you are targeting is important. Work with your contacts to learn about attributes the company seeks in an employee; doing so might help you use the right keywords in your resume or during an interview.

Get Help to Stay on Target
Seeking the help of a friend or job coach can be key in keeping you on track. If a contact has experienced a similar job search situation or counseled others in their search, he or she can be a weekly touchstone to help you stay on target. A trusted support person is also a great resource for practicing a mock interview. Brainstorming possible questions (and having the right answers!) can help you appear at ease when the big day comes.

Strategize Your Job Search
Approaching a job search with the same plan of attack you have when playing a game of chess will help you navigate hurdles. Game players know there are no points to be earned by giving up, and having a plan always helps you win. Devising a strategy for your job search is similar. Make checklists for your week, celebrate milestones (such as finally structuring a winning resume), and always keep moving forward.

Control What You Can
While there are some aspects of a job search you cannot control, you can control how you search for a job and where you eventually apply. For instance, using TheJobNetwork to locate jobs that match your skill set at a particular company is a deliberate choice that sets you on the right path. Applying as soon as a job comes up gives you a running start. TheJobNetwork matches jobs to your criteria and qualifications and alerts you as soon as the job is available.

Set Up a Routine
Establish a routine you feel comfortable following every day. Check email daily, maybe even at the same time every day, to see if search results have arrived or if you received an invitation for an interview. Automatically follow up with an email after sending in an application to ensure it was received. When you are searching for a job, having a platform like TheJobNetwork doing the searching for you helps. Since employers use platforms to announce jobs, it stands to reason you'll have early access to new listings. TheJobNetwork even ranks the results, so you are able to see how closely the job matches your criteria. Doing your homework and getting ready to embrace a new job has a great deal to do with how you feel and the perceptions others have of you. Being positive and energetic is a great calling card.

Monday, October 21, 2019

Meiosis Comparison essays

Meiosis Comparison essays The two divisions of meiosis, meiosis I and meiosis II, follow the same interphase. In this phase, the chromosomes replicate as they do in the S-phase preceding mitosis. In prophase I, the homologous chromosomes, each consisting of two sister chromatids, come together and condense in a process called synapsis. The four chromatids of a pair of homologous chromosomes, visible under a microscope, are called a tetrad. Some of these chromatids crisscross at chiasmata, sites where genetic material is exchanged, which help hold the chromosomes together. The genetic traits of the pair of chromosomes are then mixed in a process called crossing over. Meanwhile, spindle fibers made of microtubules form as the centrosomes begin to separate to opposite poles of the cell. In metaphase I, the chromosomes, now connected to kinetochore microtubules, line up at the metaphase plate. The spindle fibers pull the chromosomes apart toward opposite ends in anaphase I, but unlike anaphase in mitosis, the chromosomes retain their centromeres. In telophase I and cytokinesis, the chromosomes are completely relocated to opposite ends of the cell. Each pole has a haploid set of chromosomes, that is, a single set of chromosomes. Cytokinesis usually occurs simultaneously with telophase I. A pinch forms outside the cell, forming a cleavage furrow and ultimately splitting the cell into two. In some cells, however, the chromosomes recondense in interphase II before entering meiosis II. In meiosis II, each of the daughter cells of meiosis I undergoes its own division. In prophase II, the spindle apparatus forms. In metaphase II, the chromosomes line up just as they do in other variants of metaphase. They separate and move toward opposite ends of the cell in anaphase II. In telophase II and cytokinesis, the cells divide by way of a cleavage furrow and create four daughter cells. Meiosis in animals occurs only in the ovaries and te...

Sunday, October 20, 2019

Heart Rot Tree Disease - Prevention and Control

Heart Rot Tree Disease - Prevention and Control. In trees, heart rot is a fungal disease that causes the center of the trunk and branches to decay. The most obvious symptom is the presence of mushrooms or fungal growths, called conks, on the surface of the trunk or limbs. Most hardwood species can be afflicted with heart rot, and it can be a major problem for the logging and lumber industry since the center heartwood is the most valuable wood in a hardwood tree.

Causes of Heart Rot in Trees
Heart rot in living trees can be caused by many different fungal agents and pathogens that enter the tree through open wounds and exposed inner bark wood and infiltrate the center core of the tree, the heartwood. Heartwood makes up most of a tree's inner wood and support structure, so over time this rot can cause the tree to fail and collapse. Heartwood cells have some resistance to decay but depend on a barrier of protection from the bark and the outside living tissue. Heart rot can occur in many hardwoods and other deciduous species but is especially common in oaks infected with the I. dryophilus and P. everhartii decay fungi. All deciduous trees can get heart rot, while resinous conifers have some extra resistance.

More on Heartwood
It should be noted that heartwood is genetically programmed to spontaneously separate from the living wood tissues that surround it. Once heartwood formation has begun to lay down annual layers and increase in volume, the heartwood quickly becomes the largest part of the tree's structure by volume. When that living barrier of protection surrounding the heartwood fails, the resulting disease in the heartwood causes it to soften. It quickly becomes structurally weaker and prone to breakage. A mature tree that has a large volume of heartwood is more at risk than a young tree, simply because its heartwood constitutes more of its structure.

Symptoms of Heart Rot
Usually, a conk or mushrooming fruiting body on the surface of the tree is the first sign at the site of infection. A useful rule of thumb suggests that a cubic foot of inner heartwood has decayed for each conk produced; in other words, there is a lot of bad wood behind that mushroom. Fortunately, though, heart rot fungi do not invade the living wood of healthy trees. Other than the structural weakness heart rot creates, a tree can otherwise look quite healthy even though it is riddled with heart rot.

Economic Costs
Heart rot is a major factor influencing the economics of logging high-value lumber, although it is a natural consequence in many older forests. The heartwood of a tree is where the valuable lumber exists, and a badly rotten tree is of no value to the timber industry. A hardwood tree that lives long enough will likely deal with heart rot at some point, since it is a natural part of the tree's life cycle, especially in native forests. A very old tree will almost certainly suffer storm damage at some point that will allow fungi to enter and begin the process of heart rot. In some cases, entire forests may be at risk if, for example, a catastrophic storm has caused major damage at some time in the past. The fungi spread very slowly within a tree, so it may be many years after the initial fungal infection that serious weakness becomes evident. Heart rot is prevalent throughout the world, and it affects all hardwood trees. It can be very hard to prevent and control, although a tree that is carefully monitored over its entire lifetime may avoid it.
Prevention and Control of Heart Rot
As long as a tree is growing vigorously, rot will be confined to a small central core within the tree. This behavior is called tree wood compartmentalization. But if the tree is weakened and fresh wood is exposed by severe pruning or storm damage, decay fungi can advance into more and more of the tree's heartwood. There is no economically feasible fungicide to use on a tree that hosts heart rot fungi. The best way to prevent heart rot in your hardwood tree is to keep it healthy using proper management techniques:
- Minimize pruning wounds that expose large areas of wood.
- Shape trees at an early age so major branch removal will not be necessary later.
- Remove broken branch stubs following storm damage.
- Have trees you suspect of heart rot checked by an arborist to determine if sufficient live wood is present for structural safety.
- Check trees every few years to be certain new growth is maintaining a sound structure. Large trunks and main branches with extensive decay may have little sound wood to support the tree.

Saturday, October 19, 2019

The Media's Role in Terrorism Research Paper

The Media's Role in Terrorism - Research Paper Example. World-renowned terrorists such as Osama Bin Laden and Ayman Al-Zawahiri have been known to be particularly obsessed with the media (Transnational Terrorism, Security and the Rule of Law, 2008). According to Hoffman (2006), terrorists' obsession with the media is predicated on the belief that fear is only generated when the media publicizes terrorist attacks. Without media coverage, a terrorist attack can only spread limited fear (Hoffman, 2006). Terrorists typically attempt to generate public resentment of government oppression, and fear, among both the government and the public, that the terrorist group is powerful. The media is an important vehicle for delivering this type of fear (Walsh, 2010). Media coverage of a terrorist attack can overplay the damage, which could lead to government action that represses human rights and potentially results in public disapproval of government responses. Similarly, media coverage of terrorist attacks, often with graphic images and prolonged coverage, can overplay the damage and thus invoke fear of a powerful and dangerous terrorist faction. According to Bockstette (2008), Jihadist terrorist groups maintain a strategic communication system which is propagated via the media. The communication goals are three-fold. The first of the communication goals is to spread information about Islam to Muslims with a view to establishing and propagating a fixed idea about what it means to be a Muslim. The second communication goal is also directed toward Muslims and those who might question acts of violence on religious grounds.

Friday, October 18, 2019

Atmospheric pollution and its effect on human health Essay

Atmospheric pollution and its effect on human health - Essay Example. Accidental air pollution comes from leakage and blasts in industrial furnaces, as well as from ample consumption of fuel alternatives and smoking. On the other hand, industrial air pollution characterizes a type that pollutes the environment via the emissions caused by thermal plant operations, the wide use of construction materials such as cement and steel, fertilizers, pesticides, atomic units, and industrial wastes. The greenhouse effect, derived from the contamination of the air by several important gases and from fossil fuel combustion, makes another foul contribution; this type is especially associated with the greenhouse gases, namely carbon dioxide, methane, water vapor, nitrous oxide, and ozone, which return to the lower atmospheric region after rising against gravity. Transport-related air pollution similarly originates from the smoke of petrol or diesel burnt in different vehicle engines, which emit noxious gases in concentrations ranging from mild to, at worst, poisonous. How does each of these types of air pollution affect human health and the environment? Smog is proven to have caused serious respiratory diseases, as in the 1952 incident in London that resulted in the death of 4,000 people. Greenhouse gases likewise pose a threat to crops and livestock, besides potentially harming human skin, which may suffer corrosion or cancer at critical gas levels. As heat builds up due to the greenhouse effect, this further leads to climate change and global warming. Among transport-related air pollutants, carbon monoxide, for instance, can drive oxygen out of the bloodstream, causing apathy, fatigue, headache, disorientation, and decreased muscular coordination and visual acuity. Industrial plants that release untreated wastes along with high levels of sulfuric or nitric acids make possible the precipitation of acid rain, which gradually erodes building structures and contaminates vegetation, drinking water, and even the aquatic habitat. Birth defects, genetic mutations, and damage to neurological systems may also follow as consequences of long-term exposure to toxic materials present in high percentages in the air. What are some ways to control air pollution? Since human activities comprise either the primary or secondary sources of air pollution, control over these activities becomes essential to keep pollutants from building up to hazardous levels in the atmosphere. Car pooling is one such means to this goal, since fewer cars on the road translate into lower consumption of fossil fuels. This way, fossil fuels are sustained and conserved for later applications. In a similar manner, taking advantage of public transport may help regulate emissions, besides being an act of support that augments public income. One may opt to walk or simply ride a bicycle to cover short distances as much as possible, so that certain quantities of fuel are saved; and if most people heartily participate in this endeavor of minimizing the use of a major emission source, a number of places could eventually be freed of smoke and of the uncomfortably warmer temperatures due to sensible heat. Likewise, there is immense worth in utilizing alternative sources of energy aside from the traditional fuels. If there emerges

Branding for the UK Youth Market Research Proposal

Branding for the UK Youth Market - Research Proposal Example A degree of understanding with regard to their broad interests in relation to how they spend their money will also need to be appreciated, in particular whether there is a holistic group product awareness and market for brands like Nike or Sprite. Within this research framework, questions will be raised relating to whether the prevention of product consumerism, as with tobacco and alcohol, works effectively or whether campaigns for drink and smoking awareness prove ineffective for most young people. This should conclude whether current marketing is acting responsibly towards the needs of young people, as well as what the real motivators and trends of the average young person are where branding is concerned. This will provide a comprehensive definition of what is meant by branding and outline the general position of the current youth market in comparison to fifty years ago. This section will provide an overview and a context for the chapters ahead. The literature review will demonstrate the types of sources that were utilised for the purposes of researching and demonstrating the findings presented within the dissertation. This will include a comprehensive analysis of all the key references that have been used to argue the points under discussion within this thesis. The dissertation will use a variety of resources including textbooks, research papers, journals, relevant articles and web resources in order to support the arguments for discussion. The total length of the literature review should represent 25% of the dissertation. As a means of setting the context within which a changing market has evolved and is still developing, Bill Osgerby's innovative Youth Media will be drawn upon. This text explores youth culture and the media, the 'Fab Phenomenon', and representations, responses and effects of the media on young people. It also focuses on lifestyle, culture and identity. The Journal of Consumer Behaviour offers a number of useful and relevant volumes that provide primary research findings, including Uncovering the links between brand choice and personal values among young British and Spanish girls by Anne Dibley and Susan Baker. Their paper presents empirical research relating to specific areas of branding, including how snack brands can satisfy particular young female values amongst 11-12-year-old British and Spanish consumers. Links between brand choice and personal values amongst the young are analysed and proved legitimate, particularly in relation to associations with fun, excitement and friendship. The Art of Digital Branding by Ian Cocoran is a very up-to-date text discussing the art of digital branding for the benefit of contemporary audiences. It looks at how different colour schemes, site maps and menu formats can work effectively at engaging with different people to satisfy different needs, and comments on the challenges of the changing morals of youth. Similarly, Matt Haig looks at the phenomena of modern methods of marketing to the young in Mobile Marketing: The Message Revolution, which essentially discusses the powerful and direct method

Global Warming Regulations Essay Example | Topics and Well Written Essays - 4500 words

Global Warming Regulations - Essay Example The industrial revolution has given birth to different human activities that involve the burning of fossil fuels like coal, oil and natural gas for the purpose of obtaining energy. Carbon is an important ingredient of the fuels that are burnt for getting energy (Richard C. Rockwell, 1998). This burning is the primary source of emission of different greenhouse gases like carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and ozone (O3). The emission of these different gases is closely related to the pollution released into the atmosphere. In this process some other gases like carbon monoxide, nitrogen oxides and carbon dioxide are also produced as a result of incomplete combustion of hydrocarbons in the fuels of automobiles (Sandra L. Justus, 1998). This type of auto emission generates carbon monoxide, which is a big challenge for global warming (Steven J. Moss and Richard J. McCann, 1993). At present motor vehicles produce about 60 percent of the nationwide total carbon monoxide (ENS News, 2003), and in some cities it is as high as 95 percent. This colorless, odorless but deadly harmful gas is also produced by industrial processes, non-transportation fuel use and wildfires (Sandra L. Justus, 1998). In order to curb global warming, the US government has attempted to regulate these emissions. These regulations are directly affecting the functions and business of emission trading markets. The current discussion about the government regulations on carbon monoxide emission will focus on the negative effects of carbon monoxide that are the main cause of these regulations, and on the impact of these regulations on the emission trading markets. Moreover, we will trace the negative and positive effects of previous regulations so that we can have a clear picture in front of us that can help in predicting the effects and outcomes of the current regulations.

Reasons behind the Regulations
The US Government has enacted the regulation on the emission of carbon monoxide. This decision is influenced by several factors. Among them the poisonous effects of carbon monoxide on human health are very important. The other reasons include safety from air pollution, political pressure, etc.

Carbon Monoxide and Human Health
The negative impact of carbon monoxide on public health is an important reason behind the regulation. Carbon monoxide is the most toxic substance that people come in contact with in their daily lives. When carbon monoxide is produced in large amounts it remains present around people in different places. It has negative impacts on human health and it can affect people at their workplaces, homes, garages, cars, caravans and boats (IAPA, 2006). When people inhale carbon monoxide, it passes through the respiratory system and goes to the lungs, from where it passes directly into the bloodstream through the air sacs. Carbon monoxide affects the functions of the blood and stops oxygen from reaching the body tissues, and insufficient

Thursday, October 17, 2019

Literature review on ethical issue between employees and their managers Essay

Literature review on ethical issue between employees and their managers - Essay Example The daily interaction, collaboration and interpersonal relationships required from managers and employees create tensions and issues that are considered to normally pervade the working environment. Apart from operational concerns, managers and employees are faced with contrasting beliefs, values and preferences that occur because of the diversity in personalities, traits, cultural orientations and demographical factors that form each individual in the organization. These beliefs fall under ethical issues in business, defined as "the principles and standards that guide behavior in the world of business" (Ferrell, Fraedrich & Ferrell, 2009, 6). In this regard, the current study aims to proffer a review of related literature on the subject of ethical issues between managers and employees. The theoretical framework and impetus for the review came as a result of an interview with a legal researcher for the Saudi Central Bank, who identified problems which are ethical in nature and existed in their organization, currently affecting job satisfaction and productivity of employees.

Ethical Issues in Business Organizations
Ethical behavior has been identified to manifest actions that are "morally accepted as 'good' and 'right' as opposed to 'bad' or 'wrong' in a particular setting" (Sims, 1992, 506). ... According to Martires and Fule (2004), the culture of an organization influences the ethical climate that pervades it. Organizational culture is a set of symbols, myths and ceremonies that reflect the underlying values and beliefs of the organization or its work force. This statement is supported by Hunt (1991) and Schneider and Rentsch (1991), who emphasized that there are factors that influence diversity in the ethical climates of organizations, to wit: "personal self-interest, company profit, operating efficiency, individual friendships, team interests, social responsibility, personal morality, rules and standard procedures, and laws and professional codes" (cited in Sims, 1992, 510). As such, more detailed ethical issues facing human resources in organizations, particularly between managers and employees, are revealed by CiteHR (n.d.), to wit: (1) "discrimination issues include discrimination on the bases of age (ageism), gender, race, religion, disabilities, weight and attractiveness; (2) issues surrounding the representation of employees and the democratization of the workplace: union busting, strike breaking; (3) issues affecting the privacy of the employee: workplace surveillance, drug testing; (4) issues affecting the privacy of the employer: whistle-blowing; (5) issues relating to the fairness of the employment contract and the balance of power between employer and employee: slavery, indentured servitude, employment law; and (6) occupational safety and health" (CiteHR, n.d., par. 1). In the case of the legal researcher for the Saudi Central Bank, the ethical issue that existed between managers and employees was manifested in the way the manager discriminated against underperforming employees, which further

Independent study Essay Example | Topics and Well Written Essays - 3000 words

Independent study - Essay Example Before the individual reaches the medical care system, the incidence of sudden death and of deaths that occur before these individuals could receive medical supervision constitutes the major challenge to the present system of cardiovascular care (Harken, 2004; Wenger, 2004). In the United Kingdom alone, roughly 20 million local citizens survive heart attacks and strokes each year and require continuous clinical care (WHO, 2007). Considering that those individuals who have had heart attacks and strokes are at high risk of repeated attacks, including death, it is essential for health care and clinical nurses, particularly those who are working in a cardiology ward, to learn more about the importance of proper administration of oxygen therapy immediately after a myocardial infarction attack. Aiming to enable the readers to understand more about the topic, the researcher will discuss the rationale for choosing the topic, particularly the relevance of administering oxygen therapy for myocardial infarction as well as its relevance to working in a cardiology ward. Eventually, a literature review will be conducted focusing on general information about myocardial infarction, including the major causes of myocardial infarction; the negative impact of having a myocardial infarction; the importance of early intervention in a myocardial infarction attack; as well as the impact of oxygen therapy administration on patients with myocardial infarction. Based on the gathered literature, the strengths and limitations of the current practice, including some recommendations for practice development, will be thoroughly discussed. According to Dr. Richard Lippman, a renowned researcher, "oxygen deprivation is the major cause of heart attacks among 1.5 million people each year" (OxyGenesis Institute, 2007). Oxygen, one of the most important elements and nutrients of all life, is delivered to the human cells by the blood. Considering that the coronary arteries or blood vessels of individuals

Tuesday, October 15, 2019

Macroeconomics Discussion Essay Example | Topics and Well Written Essays - 750 words

Macroeconomics Discussion - Essay Example Money supply is the money circulating in the economy, which is created by the FED, the depositors, and investors. Each of the 12 Federal Reserve banks performs the following: a. clear checks; b. issue new currency; c. withdraw damaged currency from circulation; d. administer and make discount loans to banks in their districts; e. evaluate proposed mergers and applications for banks to expand their activities; f. act as intermediaries between the business community and the Fed; g. examine bank holding companies and state-chartered banks; h. collect data on local business conditions; i. use their staff of professional economists to research topics related to monetary policy (Mishkin 369-370). The money supply can be changed by increasing the deposits held by banks. This money creates a repercussion of effects in the economy when borrowed by companies who use it for their operations. Through the money multiplier, the invested money could increase employment and output by more than its actual value. (3.) You are appointed as the chair of the FRB. Congratulations! Chair, the economy is in recession: what are the policy measures you will undertake to push GDP toward potential GDP? What are the problems of implementing monetary policy in practice? Under an expansionary policy, the central bank must increase the money supply and lower short-term interest rates. The Fed can engage in the following: a. open market purchases, which expand reserves and the monetary base; b. lowering the discount rate, which encourages borrowing by banks; or c. lowering the reserve requirements among banks. Part Three: write a few sentences summarizing what you have learned and how learning this will help you personally. :) Thanks!! The most important thing which I have learned so far is the interdependence of the players in an economy. It is very important to note that the action of one player can have a tremendous effect on other sectors. Learning the functions of money, how money is controlled and managed, and how it can be used to stimulate or slow down the economy is really something very interesting to me. Knowing that my actions can influence the economy, I can now align my decisions in order to help the FED achieve its economic goals. This is very important given the forecasted downturn in the US economy in the coming future. Part Four: What is money supply, M1 and M2? Which definition of money supply is more liquid and why? M1 is the narrowest measure of money, which includes currency, checking account deposits and travelers checks. M2 includes M1 plus other assets that have check-writing features such as small-denomination time deposits,

Monday, October 14, 2019

Control system for microgrid

Control system for microgrid

Abstract
In this study an example of a microgrid composed of a diesel generator and two uninterruptible power supply systems is considered. This microgrid is installed in three buildings of the Tallinn University of Technology. This paper deals with how to implement a distributed control and monitoring system based on the Ethernet network in the microgrid. The paper describes a control strategy to implement both grid-connected and islanded operation modes of the microgrid.

Keywords: Control system, diesel generator, microgrid

Introduction
Distributed generation (DG) is becoming an increasingly attractive approach to reduce greenhouse gas emissions, to improve power system efficiency and reliability, and to relieve today's stress on power transmission and distribution infrastructure [1]. Distributed generation encompasses a wide range of prime mover technologies, such as internal combustion engines, gas turbines, microturbines, photovoltaics, fuel cells and wind power [32]. A better way to realize the emerging potential of DG is to take a system approach which views generation and associated loads as a microgrid [21]. A microgrid is a concept defining the operation of distributed generation, in which different microsources operate as a single controllable system that provides power and heat to a cluster of loads in the local area [3], [8], [9]. A well designed microgrid should appear as an independent power system meeting the power quality and reliability requirements [3]. The primary goal of microgrid architectures is to significantly improve energy production and delivery to load customers, while facilitating a more stable electrical infrastructure with a measurable reduction in environmental emissions [10]. The most positive features of microgrids are the relatively short distances between generation and loads and the low generation and distribution voltage level. The main function of a microgrid is to ensure stable operation during faults and various network disturbances.

The microgrid is a promising concept on several fronts because it [18]: provides means to modernize today's power grids by making them more reliable, secure, efficient, and decentralized; provides systematic approaches to utilize diverse and distributed energy sources for distributed generation; provides uninterruptible power supply functions; and minimizes emissions and system losses.

Despite the many advantages of microgrids, there remain many technical challenges and difficulties in this new power industry area. One of them is the design, acceptance, and availability of low-cost technologies for installing and using microgrids [4]. The increased deployment of power electronic devices in alternative energy sources within microgrids requires effective monitoring and control systems for safe and stable operation while achieving optimal utilization of different energy sources [35]. Microgeneration also suffers from a lack of experience, regulations and norms. Because of specific characteristics of microgrids, such as the heavy involvement of control components and the large number of microsources with power electronic interfaces, many difficulties remain in controlling microgrids. Realization of the complicated control processes in microgrids requires a specific communication infrastructure and protocols. During the process of microgrid organization, many questions concerning protection and safety aspects emerge. Also, it is required to organize free access to the network and efficient allocation of network costs.
The predominant existing distributed generation is based on an internal combustion engine driving an electric generator [36]. To investigate various aspects of the integration of alternative energy sources such as conventional engine generators, this paper proposes a prototype microgrid for three academic buildings at the Tallinn University of Technology, which consists of a diesel generator and battery storage with a power electronic interface. The main goal of this work is to design an intelligent control system for the microgrid that is efficient enough to manage itself for power balance by making use of state-of-the-art communication technology. Moreover, the aim of this paper is to describe the control strategy of the microgrid operation in both steady-state modes. This control system enables the microgrid to balance the electric power demand and supply and to simultaneously control the state of the power network.

Microgrid Theoretical Background
A microgrid is described as a small (several MW or less in scale) power system with three primary components: distributed generators with optional storage capacity, autonomous load centers, and the system capability to operate interconnected with or islanded from the larger utility electrical grid [10], [11]-[13]. According to [39], [22], multiple facility microgrids span multiple buildings or structures, with loads typically ranging between 2 MW and 5 MW. Examples include campuses (medical, academic, municipal, etc.), military bases, industrial and commercial complexes, and residential building developments. Microgrids include several basic components for operation [3], [4]. An example of a microgrid is illustrated in Fig.1.

Distributed Generation
Distributed generation units [1] are small sources of energy located at or near the point of use. There are two basic classes of microsources: one is a DC source (fuel cells, photovoltaic cells, etc.), the other is a high-frequency AC source (microturbines, reciprocating engine generators, wind generators), which needs to be rectified. An AC microgrid can be a single-phase or a three-phase system. It can be connected to low voltage or medium voltage power distribution networks.

Storage Devices
Distributed storage technologies are used in microgrid applications where the generation and loads of the microgrid cannot be exactly matched. Distributed storage provides a bridge in meeting the power and energy requirements of the microgrid. Distributed storage enhances the overall performance of microgrid systems in three ways. First, it stabilizes and permits DG units to run at a constant and stable output, despite load fluctuations. Second, it provides ride-through capability when there are dynamic variations of primary energy (such as those of sun, wind, and hydropower sources). Third, it permits DG to seamlessly operate as a dispatchable unit. Moreover, energy storage can benefit power systems by damping peak surges in electricity demand, countering momentary power disturbances, providing outage ride-through while backup generators respond, and reserving energy for future demand. There are several forms of energy storage, such as batteries, supercapacitors, and flywheels.

Interconnection Switch
The interconnection switch is the point of connection between the microgrid and the rest of the distribution system.
New technologies in this area consolidate the various power and switching functions (power switching, protective relaying, metering, and communications) traditionally provided by relays, hardware, and other components at the utility interface into a single system with a digital signal processor. The interconnection switches are designed to meet grid interconnection standards.

Control System
The control system of a microgrid is designed to safely operate the system in grid-parallel and stand-alone modes. This system may be based on a central controller or embedded as autonomous parts of each distributed generator. When the utility is disconnected, the control system must control the local voltage and frequency, provide (or absorb) the instantaneous real power difference between generation and loads, provide the difference between generated reactive power and the actual reactive power consumed by the load, and protect the internal microgrid.

Structure of the Proposed Microgrid
The microgrid is installed in three buildings of the Tallinn University of Technology (TUT): the Faculty of Power Engineering, the TUT Library, and the School of Economics and Business Administration. Consequently, according to the classification given in [22], this power system can be defined as a multiple facility microgrid. Fig.2 illustrates the various components of the power system of the microgrid at TUT. The structure of the microgrid for the campuses of TUT is proposed, and Fig.3 shows a schematic of the power system. The microgrid systems targeted in this study are autonomous areas having a power demand of several kilowatts and including a diesel generator, two uninterruptible power supply (UPS) systems with battery storage, and loads. They are connected to the power electronic interface forming a local AC network of 230 V, 50 Hz. The diesel generator is used as the main distributed energy resource in this microgrid. It has a nominal power of 176 kW/220 kVA, a voltage of 240 V/400 V and a maximum current of 318 A. This generator is connected to the AC bus via the automatic relay logic (ARL2). The ARL2 continuously observes both of its sides: the main grid and the microgrid. If there is a fault in the general grid, the ARL2 will disconnect the microgrid, creating an energetic island. The battery banks (E1 and E2) are used as the distributed energy storage devices in the microgrid to ensure continuous supply of the local load. They are interfaced to the electrical network through the two UPS systems: UPS1 (160 kVA) and UPS2 (240 kVA). Hence, we can conclude that the microgrid has two main possible operation modes: grid-connected and islanded mode. The main customers of the microgrid are the computers and servers located in the laboratories and office rooms in the three buildings of TUT. The clients in the Library Building (computers) are interfaced to the electrical network using ARL1. In addition, four experimental loads (Experimental loads 1..4) are used that can be connected to the distribution board located in the Laboratory of Electrical Drives. Nine intelligent sensors (P1..P9) are assigned to these loads. Their task is to measure the electrical power and energy parameters of the network, such as voltage, current, power, energy and power factor, and to transmit this information to the controller. The microgrid is connected to the general city electricity grid using two two-section transformer substations (6000 V/400 V) located in the Faculty of Power Engineering and the School of Economics and Business Administration Buildings.
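To make the structure just described easier to follow, the sketch below models the main elements of the proposed microgrid (the diesel generator, the two UPS units with battery banks E1 and E2, and the nine metering points P1..P9) as plain Python data objects. This is only an illustrative data model under assumed names; it is not part of the actual TUT installation or its software.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    rated_kva: float          # apparent power rating of the unit

@dataclass
class Microgrid:
    generators: List[Component] = field(default_factory=list)
    storage: List[Component] = field(default_factory=list)
    meters: List[str] = field(default_factory=list)

    def total_generation_kva(self) -> float:
        # Sum of dispatchable generation available inside the microgrid.
        return sum(c.rated_kva for c in self.generators)

# Rough model of the installation described above (ratings taken from the text).
tut_microgrid = Microgrid(
    generators=[Component("Diesel generator", 220.0)],
    storage=[Component("UPS1 + battery bank E1", 160.0),
             Component("UPS2 + battery bank E2", 240.0)],
    meters=[f"P{i}" for i in range(1, 10)],   # nine intelligent sensors P1..P9
)

print(tut_microgrid.total_generation_kva())   # 220.0 kVA of dispatchable generation
```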
Description of the Control System
Taking into account the configuration and features of the power network of the Tallinn University of Technology, the control system structure for the microgrid is designed with the following specifications: the balance of electric power demand and supply of the power network is provided, and both the steady-state modes and the transient performance of the microgrid are achieved. A block diagram of the hierarchical control system, which is based on multi-agent technology [40], [41], is shown in Fig.4. The design of the control system can be divided into hardware and software. The control structure of the microgrid has three levels: the operator console and application server; the central controller (CC); and the local controllers (LC) and measuring devices.

The operator console is a computerized workstation with special software which comprises supply and demand calculation units, monitoring units, control schemes and dispatching units. The function block diagram of the software is shown in Fig.5. The operator console heads the hierarchical control system. Its main goals are: to keep track of the whole system by monitoring the status of the communication nodes and generating units; to collect data from the measuring devices; to calculate the supply and demand of power; to visualize the information received; to display the basic modes of the microgrid; and to transfer control commands to the central controller. The application server is designed for archiving data received from the measuring devices.

The main interface between the operator console and the other communication nodes of the microgrid control system is the central controller. It is mainly responsible for the management of the microgrid and for the optimization of the microgrid operation. The central controller operates in real time. Its main functions are the connection and disconnection of the microgrid, the synchronization process, and the detachment of loads. In addition, the aims of the central controller are: to collect information from the measuring devices; to transfer data from the operator console and the application server; to manage the power supply switches; and to transmit the control commands to the local controllers.

The group of local controllers belongs to the third hierarchical control level. It includes the microsource controller located in the distributed resources of the microgrid, which manages the active and reactive power production levels of the diesel generator. Moreover, the microsource controller is responsible for maintaining the desired steady-state and dynamic performance of the power network. The other local controllers are located in the two UPS systems. Their main goal is to manage the charging of the battery storage.

Measuring process
Information required by the proposed monitoring and control system consists of voltage, current, power, energy, and power factor measurements. Real-time information is acquired through the intelligent measuring devices located at the output of the energy source, at the input of each load, and at both UPS systems. In this system, the Allen-Bradley Powermonitor 3000 [25] is used to measure these instantaneous values. It implements real-time power monitoring with a selectable update rate of 50 ms. Such operating information is displayed in real time for monitoring and energy management purposes.
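As a rough illustration of how the central controller could aggregate the readings delivered by these measuring devices, the following sketch polls a set of meter objects and computes the instantaneous supply/demand balance on a 50 ms cycle. The `Meter` class, its `read_kw()` interface and the assignment of meters to sources and loads are assumptions made for the example; the real Powermonitor 3000 units are read over their own industrial protocol rather than through this API.

```python
import random
import time

class Meter:
    """Stand-in for one intelligent measuring device (e.g. a P1..P9 sensor)."""
    def __init__(self, name: str, is_source: bool):
        self.name = name
        self.is_source = is_source

    def read_kw(self) -> float:
        # Placeholder: a real implementation would query the device over Ethernet.
        return random.uniform(0.0, 50.0)

def power_balance(meters) -> float:
    """Positive result: surplus generation; negative: deficit to be covered."""
    supply = sum(m.read_kw() for m in meters if m.is_source)
    demand = sum(m.read_kw() for m in meters if not m.is_source)
    return supply - demand

meters = [Meter("Generator output", True)] + [Meter(f"P{i}", False) for i in range(1, 10)]

for _ in range(3):               # three monitoring cycles
    print(f"balance = {power_balance(meters):+.1f} kW")
    time.sleep(0.05)             # 50 ms cycle, matching the selected update rate
```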
Communication network
A communication infrastructure is needed between the central controller and the local controllers [23]. The short geographical span of the microgrid may aid in establishing a communication infrastructure using low-cost communications. The adoption of standard protocols and open technologies allows designing and developing modular solutions using off-the-shelf, low-cost, widely available, and fully supported hardware and software components. At the present time many low-cost microcontrollers include at least an Ethernet controller, and cheap standalone controllers are also available. The main advantages of using Ethernet are the transition from centralized control to distributed control and the reduction of wiring, since there is no need for point-to-point connections. This solution provides flexibility and scalability for low-cost implementations. Taking these into account, the Ethernet industrial protocol has been chosen in this microgrid as the communication network for data transfer for all those control units. The data to be exchanged between network controllers mainly include messages containing set-points for the LCs, information requests sent by the microgrid central controller to the LCs about active and reactive powers and voltage levels, and messages to control microgrid switches. The LC is responsible for collecting local information from the attached energy resource and takes some real-time decisions based on the control algorithm. The communication network of the control system is illustrated in Fig.6. Every communication node has to register with the master server. The node sends its information to the master server through diverse communication channels. Furthermore, this topology provides an opportunity for immediate control center access via remote consoles and web-based laptops for necessary actions to be taken. To include new generation resources or storage devices in a flexible manner into the microgrid, multi-agent technologies [40] might be applied. The proposed hierarchical control scheme provides a flexible platform to make high-level decisions.

Control Strategy of Operation of the Microgrid
A microgrid may operate either connected to the main grid or disconnected from it. There are two steady states of operation, grid-connected (Mode-G) and islanded (Mode-I). Furthermore, there are two transient modes of operation, the transfer from Mode-G to Mode-I and the transfer from Mode-I to Mode-G. The key issue of the control is how to maintain the voltage and frequency stability of the microgrid [20].

Grid-connected mode
In the grid-connected operation mode, the main function of a DG unit is to control the output real and reactive power. The real and reactive power generated by a DG can be controlled through current or voltage regulation; thus the DG output power control schemes can be generally categorized as current-based and voltage-based power flow control [43]. During Mode-G operation, the voltage and frequency of the microgrid are set by the main grid. The aim of the uninterruptible power supply systems is to store as much backup energy as possible, so during Mode-G operation the main grid, the microgrid, or both of them will charge the batteries [20]. In grid-connected mode the balance between generation and consumption, as well as the control of the parameters of the system, is guaranteed by the utility grid. Thus, generators are regulated with the criterion of optimized economic exploitation of the installation [23]. Concerning the programmable generator, the objective of the control is to optimize the microgrid performance.
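The exchange of set-points between the central controller and a local controller in grid-connected mode can be pictured with the toy dispatch loop below. The message format and the `dispatch()` helper are hypothetical; the sketch only illustrates that in Mode-G the DG is driven to a requested P/Q operating point, while voltage and frequency are imposed by the main grid.

```python
from dataclasses import dataclass

@dataclass
class SetPoint:
    p_kw: float      # requested active power
    q_kvar: float    # requested reactive power

class LocalController:
    """Hypothetical local controller of the diesel generator."""
    def __init__(self):
        self.p_kw = 0.0
        self.q_kvar = 0.0

    def apply(self, sp: SetPoint) -> None:
        # In Mode-G the LC only tracks the requested operating point;
        # voltage and frequency references come from the utility grid.
        self.p_kw, self.q_kvar = sp.p_kw, sp.q_kvar

def dispatch(lc: LocalController, measured_load_kw: float, battery_charge_kw: float) -> None:
    # Cover the local load plus the power used to keep the UPS batteries charged.
    lc.apply(SetPoint(p_kw=measured_load_kw + battery_charge_kw, q_kvar=0.0))

lc = LocalController()
dispatch(lc, measured_load_kw=120.0, battery_charge_kw=15.0)
print(lc.p_kw, lc.q_kvar)   # 135.0 0.0
```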
Islanded mode
The microgrid (MG) operates autonomously, in a similar way to physical islands, when the disconnection from the main grid occurs [37]. When the grid is not present, the ARL2 disconnects the microgrid from the grid, starting the autonomous operation. The instant at which the intentional islanding occurs must be detected so that the inverter can change between grid-connected and intentional island modes. The detection is achieved using an algorithm described in [23]. When the main distribution network is faulted, the fault current will flow continuously into the main grid from the microgrid. At the same time, the circuit breaker of the microgrid should detect the frequency and voltage drop and open in time, which makes the microgrid disconnect automatically from the main grid and change to islanded operation mode. The diesel generator should adopt reasonable control strategies to ensure the stability of frequency and voltage in the microgrid [42]. When switched from Mode-G to Mode-I, the UPS system operates in voltage control mode, setting the voltage and frequency of the microgrid by absorbing or releasing energy. In islanded mode, due to the unavailability of the utility grid, two requirements must be fulfilled: the power balance between generation and consumption, and the control of the main parameters of the installation (voltage amplitude and frequency). In synchronous islanded mode this reference is the same as the grid voltage. This mode is also called synchronization mode and it is the mode that necessarily precedes a reconnection with the grid. The control system is responsible for assuring the power balance. In case of an energy excess, the management system can limit the output power of the diesel generator in order to avoid operation in extremely inefficient low-power generation modes. On the contrary, if all the available power is not enough to feed the local loads, the management system will detach non-critical loads. The control system is voltage controlled and it regulates the main parameters of the system. The UPS systems set the voltage and frequency of the islanded microgrid and maintain them within acceptable limits by injecting or absorbing active power and reactive power as required. As soon as the presence of the mains is detected, the microgrid control system uses feedback information from the mains voltage to adjust the energy storage unit voltage and frequency control loops to synchronize the microgrid voltage with the voltage of the main grid.
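The islanded-mode energy management rules described above (limit the diesel generator when there is an energy excess, detach non-critical loads when the locally available power is insufficient, and reconnect them when conditions allow) can be summarised in the following sketch. The load names, power figures and shedding order are illustrative assumptions, not values from the TUT system.

```python
def manage_island(available_kw: float, loads: dict, critical: set) -> dict:
    """Return the on/off status of each load so that demand fits the available power.

    Critical loads are always kept on; non-critical loads are shed one by one
    until the remaining demand can be supplied.
    """
    status = {name: True for name in loads}
    demand = sum(loads.values())
    for name in sorted(loads, key=loads.get, reverse=True):   # shed the biggest first
        if demand <= available_kw:
            break
        if name not in critical:
            status[name] = False
            demand -= loads[name]
    return status

loads = {"Servers": 60.0, "Office computers": 40.0, "Experimental load 1": 30.0,
         "Experimental load 2": 25.0}
critical = {"Servers"}

print(manage_island(available_kw=110.0, loads=loads, critical=critical))
# Sheds 'Office computers' and 'Experimental load 1', keeping demand within 110 kW.
```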
Transition from Grid-Connected to Islanded Mode
There are various islanding detection methods proposed for DG systems [44]. As mentioned above, there is a different control strategy when the laboratory-scale microgrid system operates in Mode-G or Mode-I. If there is a transition between these two modes, the control mode of the battery inverter will change. A switching circuit, as shown in Fig.7, is designed to realize this transition [20]. A load-voltage control strategy proposed by [23] is employed to provide the operation of the microgrid. Disconnection of the microgrid from the grid can be provoked by many causes, like an unsatisfactory grid voltage (in terms of amplitude or waveform) or even economic aspects related to the power price. In order to monitor the grid voltage characteristics, a voltage monitoring module is required. This module continuously measures the rms grid voltage, comparing it with a pre-established threshold value. When any of the phase voltages drops below the threshold value (0.9 pu in this case), the detection signal is activated. If 20 ms after the first detection this signal is still activated, the microgrid must be disconnected from the utility grid and must pass to islanded operation mode; otherwise the microgrid will remain connected to the utility grid. In this way unnecessary islandings are avoided and selectivity is respected. A 20 ms time window has been chosen after verifying through experimental tests and standards [47] that a personal computer (which is considered the most critical residential load in this microgrid) is not affected by a 20 ms voltage interruption. As soon as the microgrid is disconnected from the grid, the programmable generator controller passes from a power control mode to a voltage control mode. The microgrid power consumption is also continuously measured in order to detach non-critical loads if there is not enough locally available power. In addition, if consumption or generation conditions change and it becomes possible to feed all the local loads, non-critical loads will be reconnected.

Transition from Islanded to Grid-Connected Mode
When the grid-disconnection cause disappears, the transition from islanded to grid-connected mode can be started. To avoid hard transients in the reconnection, the diesel generator has to be synchronized with the grid voltage [23]. The DG is operated in synchronous island mode until both systems are synchronized. Once the voltage in the DG is synchronized with the utility voltage, the DG is reconnected to the grid and the controller will pass from voltage control mode to current control mode. When the microgrid is working in islanded mode and the ARL2 detects that the voltage outside the microgrid (in the grid) is stable and fault-free, the microgrid has to be resynchronized to the frequency, amplitude and phase of the grid in order to reconnect the microgrid seamlessly. If the grid-disconnection cause disappears and the grid voltage fulfills the desired requirements, the transition from islanded to grid-connected mode can be started. The grid voltage conditions will again be monitored by the voltage monitoring module. In this way, if the grid voltage exceeds the threshold value, the detection signal is deactivated. If 20 ms after the first detection the detection signal is still deactivated, it means that the utility grid has returned to normal operating conditions and the microgrid can reconnect to the grid. However, before the reconnection, the microgrid has to be synchronized with the grid voltage in order to avoid hard transients in the reconnection. To do so, the microgrid operates in synchronous islanded mode for 100 ms with the aim of decoupling the reference variation and the physical grid reconnection transients. In this operating mode the voltage in the microgrid is set to the characteristics of the grid voltage, frequency and phase. Once the voltage in the microgrid is synchronized with the utility voltage, the microgrid can be reconnected to the grid and the programmable generator controller will pass from a voltage control mode to a power control mode. In the same way, if non-critical loads have been detached, they are also reconnected. In the presence of unplanned events like faults, microgrid separation from the MV network must occur as fast as possible. However, the switching transient will have a great impact on microgrid dynamics.
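A compressed view of the mode-transition logic from the last two subsections is sketched below as a small state machine: an undervoltage below 0.9 pu that persists for 20 ms triggers islanding, and reconnection is only performed after the grid voltage has been healthy for 20 ms and a 100 ms synchronous-island interval has elapsed. The timing constants come from the text; the state machine itself and its interface are simplifying assumptions.

```python
THRESHOLD_PU = 0.9        # rms voltage threshold used by the voltage monitoring module
CONFIRM_MS = 20           # confirmation window before changing mode
SYNC_MS = 100             # synchronous-island interval before reconnection

class ModeController:
    def __init__(self):
        self.mode = "GRID_CONNECTED"
        self._abnormal_ms = 0
        self._healthy_ms = 0
        self._sync_ms = 0

    def step(self, v_pu: float, dt_ms: int = 1) -> str:
        grid_ok = v_pu >= THRESHOLD_PU
        if self.mode == "GRID_CONNECTED":
            self._abnormal_ms = self._abnormal_ms + dt_ms if not grid_ok else 0
            if self._abnormal_ms >= CONFIRM_MS:
                self.mode, self._healthy_ms = "ISLANDED", 0
        elif self.mode == "ISLANDED":
            self._healthy_ms = self._healthy_ms + dt_ms if grid_ok else 0
            if self._healthy_ms >= CONFIRM_MS:
                self.mode, self._sync_ms = "SYNCHRONIZING", 0
        elif self.mode == "SYNCHRONIZING":
            if not grid_ok:                      # grid dropped again: stay islanded
                self.mode = "ISLANDED"
            else:
                self._sync_ms += dt_ms
                if self._sync_ms >= SYNC_MS:
                    self.mode = "GRID_CONNECTED"
        return self.mode

ctrl = ModeController()
# 30 ms of undervoltage followed by 150 ms of healthy grid voltage (1 ms steps).
trace = [0.5] * 30 + [1.0] * 150
for v in trace:
    ctrl.step(v)
print(ctrl.mode)   # back to GRID_CONNECTED after confirmation and synchronisation
```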
The microgrid functionalities as well as its control methods depend on the mode of operation [23]. Islanding of the MG can take place due to unplanned events like faults in the MV network or due to planned actions like maintenance requirements. In this case, the local generation profile of the MG can be modified in order to reduce the imbalance between local load and generation and to reduce the disconnection transient [48].

Conclusions
In this paper the microgrid system installed at the Tallinn University of Technology has been presented. The microgrid includes a diesel generator and battery storage with a power electronic interface. The architecture of the microgrid for the Tallinn University of Technology and a control system structure for the microgrid were proposed. The design of a control and monitoring system for the microgrid is presented in this paper, and a hierarchical control scheme is proposed. This will enhance the reliability and stability of the microgrid on the one hand, and will make the microgrid an easy-to-use product on the other.

Acknowledgement
This paper was supported by the Project DAR8130 Doctoral School of Energy and Geotechnology II.

References
A.M. Borbely, J.F. Krieder, Distributed generation: the power paradigm for the new millennium, CRC Press, Boca Raton, Florida, 2001, 388 p.
P. Nabuurs, SmartGrids, European Technology Platform, Strategic Deployment Document for Europe's Electricity Networks of the Future, September 2008, 68 p.
R. Lasseter, Microgrids, Proceedings of the 2002 IEEE Power Engineering Society Winter Meeting, vol. 1, New York, NY, 2002, pp. 305-308.
B. Kroposki, T. Basso, R. DeBlasio, Microgrid Standards and Technologies, Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, 2008, pp. 1-4.
P. Mazza, The Smart Energy Network: Electricity's Third Great Revolution, Jun. 2003. [Online]. Available: http://www.microplanet.com/upload/pdf/SmartEnergy.pdf, 22 p.
J.A. Momoh, Smart Grid Design for Efficient and Flexible Power Networks Operation and Control, IEEE Power Energy Society Power Systems Conference and Exposition, Seattle, Washington, 2009, pp. 1-8.
A. Mehrizini-Sani, R. Iravani, Secondary Control for Microgrids Using Potential Functions: Modeling Issues, Conference on Power Systems (CIGRE Canada 2009), Toronto, Canada, 2009, pp. 1-9.
A. Mohamed, Microgrid modeling and online management, PhD thesis, Helsinki University of Technology, Helsinki, Finland, 2008, 169 p.
D. Yubing, G. Yulei, L. Qingmin, W. Hui, Modelling and Simulation of the Microsources Within a Microgrid, Electrical Machines and Systems (ICEMS 2008), Jinan, China, 2008, pp. 2667-2671.
C.M. Colson, M.H. Nehrir, A Review of Challenges to Real-Time Power Management of Microgrids, IEEE Power Energy Society General Meeting, Calgary, Canada, 2009, pp. 1-8.
C.M. Colson, M.H. Nehrir, C. Wang, Ant Colony Optimization for Microgrid Multi-Objective Power Management, IEEE Power Energy Society Power Systems Conference and Exposition, Seattle, Washington, 2009, pp. 1-7.
S. Ahn, S. Moon, Economic Scheduling of Distributed Generators in a Microgrid Considering Various Constraints, IEEE Power Energy Society General Meeting, Calgary, Canada, 2009, pp. 1-6.
C.A. Hernandez-Aramburo, T.C. Green, N. Mugniot, Fuel Consumption Minimization of a Microgrid, IEEE Transactions on Industry Applications, 2005, vol. 41, no. 3, pp. 673-681.
A. Arulampalam, M. Barnes, A. Engler, A. Goodwin, N. Jenkins, Control of power electronic interfaces in distributed generation Microgrids, International Journal of Electronics, vol. 91, no. 9, London, GB, 2004, pp. 503-524.
F. Pilo, G. Pisano, G.G. Soma, Neural Implementation of MicroGrid Central Controllers, IEEE International Conference on Industrial Informatics, New York, 2007, pp. 1177-1182.
R.H. Lasseter, P. Piagi, Control and Design of Microgrid Components, Final Project Report, Power Systems Engineering Research Center (PSERC-06-03), 2006, p. 257.
P. Piagi, R.H. Lasseter, Autonomous Control of Microgrids, IEEE Power Engineering Society General Meeting, Montreal, Canada, 2006, pp. 1-8.
F.Z. Peng, Y.W. Li, L.M. Tolbert, Control and Protection of Power Electronics Interfaced Distributed Generation Systems in a Customer-Driven Microgrid, IEEE Power Energy Society General Meeting (PESGM 2009), Calgary, Canada, 2009, pp. 1-8.
R.H. Lasseter, P. Piagi, Microgrid: A Conceptual Solution, IEEE 35th Power Electronics Specialists Conference (PESC 2004), vol. 6, Aachen, Germany, 2004, pp. 4285-4290.
Y. Che, Z. Yang, K.W. Eric Cheng, Construction, Operation and Control of a Laboratory-Scale Microgrid, 3rd International Conference on Power Electronics Systems and Applications (PESA 2009), 2009, pp. 1-5.
R. Lasseter, A. Akhil, C. Marnay, J. Stephens, J. Dagle, R. Guttromson, A.S. Meliopoulous, R. Yinger, J. Eto, The CERTS MicroGrid Concept, CEC Consultant Report P500-03-089F, Sacramento, CA: California Energy Commission, 2003, 32 p.
M. Adamiak, S. Bose, Y. Liu, J. Bahei-Eldin, J. DeBedout, Tieline Controls in Microgrid Applications, Bulk Power System Dynamics and Control VII: Revitalizing Operational Reliability, 2007 REP Symposium, 2007, pp. 1-9.
H. Gaztanaga, I. Etxeberria-Otadui, S. Bacha, D. Roye, Real-Time Analysis of the Control Structure and Management Functions of a Hybrid Microgrid System, IEEE 32nd Annual Conference on Industrial Electronics (IECON 2006), 2006, pp. 5137-5142.
A. Rööp (editor, reviser), Annual Report 2008, Department of Electrical Drives and Power Electronics, Tallinn: TUT Publishing, Estonia, 2009, 74 p.
http://www.ab.com/PEMS/pm3000.html
http://www.rockwellautomation.com/rockwellsoftware/assetmgmt/energymetrix/sysreq.html
http://www.ab.com/programmablecontrol/pac/controllogix/
Design and Implementation of a Control System for a Microgrid involving a Fuel Cell Power Module
A.P. Agalgaonkar, S.V. Kulkarni, S.A. Khaparde, S.A. Soman, Placement and Penetration of Distributed Generation under Standard Market Design, International Journal of Emerging Electric Power Systems, vol. 1, issue 1, 2004, Article 1004.
Towards a Smart Network in a Business District: Combining Dispersed UPS with Distributed Generation
Designing the Optimal Stand-alone Power System which uses Wind Power and Solar Radiation for Remote Area Object
Placement and Penetration of Distributed Generation under Standard Market Design
Off-Grid Diesel Power Plant Efficiency Optimization and Integration of Renewable Energy Sources
Model Validation and Coordinated Operation of a Photovoltaic Array and a Diesel Power Plant for Distributed Generation
Distributed monitoring and control of future power systems via g