Volume 2 Issue 3

Authors: Guanlei Xu; Fengwang Lang; Shipeng Su

Abstract: This paper studies the characteristics of sea clutter under different evaporation ducts. First, the atmospheric refraction environment at Dartmouth, Canada in 1993 is simulated and analyzed. Then, two data files recorded under ducts of different strength are used to analyze the strength of the sea clutter. Moreover, the mechanism of the sea clutter is discussed through ray tracing, grazing angles, and propagation loss. The main results and contributions are as follows: (1) evaporation duct phenomena are present in the Dartmouth experiments; (2) the sea clutter is stronger in the stronger duct than in the weaker duct; (3) the power spectra differ: in the weaker duct the spectrum shows a pronounced peak at a certain frequency and the target signature is very distinctive, whereas in the stronger duct the power spectrum is more uniform and the target signature is indistinct; (4) explaining these observations from the characteristics of electromagnetic wave propagation, the paper finds that, relative to the weaker duct, the electromagnetic rays propagate farther, the grazing angles are larger, and the propagation loss is smaller, which together explain why the sea clutter becomes stronger.

Keywords: Sea Clutter; Atmospheric Duct; Ray Tracing; Grazing Angles; Propagation Loss

Doi:
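
As a rough illustration of the evaporation-duct refractivity profiles discussed in the abstract above, the sketch below evaluates the widely used log-linear (Paulus-Jeske-style) modified-refractivity model and flags the trapping layer where dM/dz < 0. The surface refractivity, roughness length, and the two duct heights are illustrative assumptions and are not taken from the paper's Dartmouth data.

```python
import numpy as np

def modified_refractivity(z, duct_height, m0=330.0, z0=1.5e-4, c0=0.125):
    """Log-linear evaporation-duct profile for modified refractivity M(z).

    z           : heights above the sea surface in metres (> 0)
    duct_height : evaporation duct height d in metres
    m0          : surface modified refractivity in M-units (illustrative)
    z0          : aerodynamic roughness length in metres
    c0          : neutral lapse rate of M, about 0.125 M-units per metre
    """
    z = np.asarray(z, dtype=float)
    return m0 + c0 * (z - duct_height * np.log((z + z0) / z0))

heights = np.linspace(0.1, 40.0, 400)
for d in (8.0, 25.0):                                 # a weaker and a stronger duct
    m = modified_refractivity(heights, d)
    trapped = heights[np.gradient(m, heights) < 0]    # dM/dz < 0 marks the trapping layer
    print(f"duct height {d:4.1f} m: trapping layer extends to ~{trapped.max():.1f} m")
```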

Authors: Mohammed Ghanbari; Martin Fleury; Laith Al-Jobouri

Abstract: Mobile broadband wireless access is increasingly being used for video streaming. This paper studies the impact of intra-refresh provision upon a robust video streaming scheme intended for WiMAX. The paper demonstrates the use of intra-refresh macroblocks within inter-coded video frames as an alternative to periodic intra-refresh video frames. In fact, the proposed scheme combines intra-refresh macroblocks with data-partitioned video compression, both error resilience tools from the H.264 video codec. Redundant video packets along with adaptive channel coding are also used to protect video streams. In harsh wireless channel conditions, all of the proposed measures are found to be necessary, because error bursts, arising from both slow and fast fading as well as other channel impairments, are possible. The main conclusions from a detailed analysis are that, because of the effect on packet size, a moderate quantization parameter should be selected, and that, because of the higher overhead of cyclic intra-macroblock line update, a low percentage of intra-refresh macroblocks per frame is preferable. The proposed video streaming scheme will be applicable to other 4G wireless technologies such as LTE.

Keywords: Broadband Wireless Access; Error Resilience; H.264 Codec; Video Streaming; WiMAX

Doi:
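
As a rough illustration of the cyclic intra-macroblock line update discussed in the abstract above, the sketch below derives a per-frame refresh schedule from the number of macroblock rows and a chosen refresh percentage. The frame geometry and the percentage are illustrative assumptions, not the authors' encoder settings.

```python
def cyclic_intra_refresh_schedule(mb_rows, refresh_percent, num_frames):
    """For each inter-coded frame, list the macroblock (MB) rows forced to intra mode.

    mb_rows         : MB rows in a frame (e.g. 18 for CIF, 288 lines / 16)
    refresh_percent : share of the frame refreshed per frame (e.g. 5 for 5 %)
    num_frames      : number of frames to schedule
    """
    rows_per_frame = max(1, round(mb_rows * refresh_percent / 100))
    schedule, cursor = [], 0
    for _ in range(num_frames):
        rows = [(cursor + i) % mb_rows for i in range(rows_per_frame)]
        schedule.append(rows)
        cursor = (cursor + rows_per_frame) % mb_rows   # wrap around and keep cycling
    return schedule

# With 18 MB rows and ~5 % refresh, one row is intra-coded per frame, so the
# whole picture is refreshed every 18 frames without periodic intra frames.
for frame_no, rows in enumerate(cyclic_intra_refresh_schedule(18, 5, 6)):
    print(f"frame {frame_no}: intra MB rows {rows}")
```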

Authors: Natarajan Somasundaram; Jeong A Lee; Y V Ramana Rao; Ramadass Narayanadass; Farhad Mehdipour

Abstract: With continued scaling of silicon process technology, producing reliable electronic components in extremely dense technologies poses a challenge. Further, systems fabricated in deep sub-micron technology are prone to intermittent or transient faults that cause unidirectional errors when exposed to ionizing radiation during system operation. The ability to operate as intended even in the presence of faults is an important objective of all electronic systems. In order to achieve fault tolerance, each module of the system must be fault-tolerant by possessing run-time (online) fault detection capabilities. Totally Self-Checking (TSC) circuits permit online detection of hardware faults. This paper presents the Scalable Error Detection Coding (SEDC) algorithm, which is used to design self-checking circuits with faster execution and lower latency overhead for use in fault-tolerant reconfigurable architectures. The SEDC algorithm and its architecture are formulated so that, for any input binary data length, only the area scales: the latency remains constant at two logic gates, and generating the SEDC code requires only a single clock cycle. It is shown that the proposed SEDC algorithm is significantly more efficient than existing unidirectional error detection techniques in terms of speed, latency, and area, while achieving 100% error detection.

Keywords: Fault Tolerance; Totally Self-Checking Circuits; Dependable Architecture; Error Detection Coding; Unidirectional Errors

Doi:
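
The SEDC construction itself is detailed in the paper above and is not reproduced here. As a hedged illustration of the class of unidirectional-error-detecting codes it is benchmarked against, the sketch below implements a classical Berger code (check symbol = count of data zeros) and shows a unidirectional error burst being detected; the word width and error positions are arbitrary.

```python
def berger_encode(data_bits):
    """Append a Berger check symbol: the binary count of 0-bits in the data word.

    Purely 1->0 errors can only raise the data's zero count while lowering the
    check value (and vice versa for 0->1), so any all-unidirectional error
    pattern across data and check bits is detected.
    """
    k = len(data_bits)
    check_len = k.bit_length()                 # enough bits to represent counts 0..k
    zeros = data_bits.count(0)
    check = [int(b) for b in format(zeros, f"0{check_len}b")]
    return data_bits + check

def berger_check(codeword, data_len):
    data, check = codeword[:data_len], codeword[data_len:]
    expected = int("".join(map(str, check)), 2)
    return data.count(0) == expected

word = [1, 0, 1, 1, 0, 0, 1, 0]
code = berger_encode(word)
print("codeword:", code, "valid:", berger_check(code, len(word)))

# A unidirectional burst (several 1 -> 0 flips) is caught:
corrupted = [0 if i in (0, 2) else b for i, b in enumerate(code)]
print("corrupted:", corrupted, "valid:", berger_check(corrupted, len(word)))
```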

Authors: Ming-Tsai Hsu; Chia-Wei Huang; Chun-Shian Tsai

Abstract: Automobile functionality and reliability are increasing rapidly while prices are falling. Therefore, more and more ECUs (Electronic Control Units) are being introduced into cars. Today, more than 50 ECUs are used in a high-end automobile, and the CAN bus was designed to reduce development cost and to let the ECUs work in coordination. Over the CAN bus, ECUs communicate with each other and data messages are delivered to each control device. The CAN bus is therefore an important paradigm for in-vehicle real-time motor control systems. In this paper, we first introduce motor control, studying how control messages are delivered over the CAN bus. Based on this technology, a motor control algorithm is also presented. In addition, to improve the safety and reliability of real-time motor control, we study the ERIKA Enterprise automotive software framework, through which the OSEK real-time operating system can easily be ported to (embedded in) the target ECU hardware. Finally, we present a demonstration application for an enhanced CAN (ECAN) bus network connection that shows how ERIKA Enterprise guarantees real-time transmission of data frames over the ECAN bus network. In other words, in-vehicle motor control can be managed by ERIKA so that data is transmitted with greater safety, reliability, and timeliness.

Keywords: Automobiles; Real-Time Operating System (RTOS); OSEK/VDX; Embedded System; CAN (Controller Area Network); Motor Control

Doi:
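
As a hedged illustration of sending a motor-control set-point over a CAN bus, the sketch below packs a command into an 8-byte data frame and exchanges it over a virtual loopback bus using the python-can library. The identifier, payload layout, and bus configuration are assumptions for demonstration only and do not reflect the paper's ECAN / ERIKA Enterprise / OSEK setup.

```python
import struct
import can   # python-can package

MOTOR_CMD_ID = 0x120                      # hypothetical 11-bit CAN identifier

def motor_command_frame(target_rpm, direction):
    """Pack a motor set-point into an 8-byte CAN data frame (assumed layout)."""
    payload = struct.pack("<Hb5x", target_rpm, direction)   # rpm, direction, padding
    return can.Message(arbitration_id=MOTOR_CMD_ID,
                       data=payload,
                       is_extended_id=False)

# Two endpoints on python-can's virtual interface stand in for real ECUs.
tx = can.Bus(interface="virtual", channel="demo")
rx = can.Bus(interface="virtual", channel="demo")

tx.send(motor_command_frame(1500, 1))
print("received:", rx.recv(timeout=1.0))

tx.shutdown()
rx.shutdown()
```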

Authors: Koby Biller

Abstract: Storage management is the control of capacity, policy, and events in storage devices. Its purpose is to achieve maximum efficiency when allocating or deleting files, while always retaining an accurate view of the repository. Storage management was developed along with the development of storage devices to solve new problems that might be encountered with new storage technologies and software changes. One technique developed to increase storage availability even when storage utilization has increased is “Disk Fragmentation”. Fragmentation was invented to split file allocation within the storage device in such a way that there would always be free space for new files if the device capacity was not fully used. The fragmentation concept works perfectly with a new device, but over time the fragments of free space become very small and every file is split into smaller fragments. The effort of accessing an existing file is compounded by the effort of finding available fragments, and the result is a delay in service. Excessive fragmentation of data in storage devices therefore imposes a restriction that is a major cause of computer slowdown and other unpredictable storage-related symptoms. This paper presents the “Data Quality” approach, which measures this restriction on the basis of research performed from 2005 to 2008, in which we surveyed the effectiveness of our patented method for measuring the quality of data affected by disk fragmentation. The approach incorporates a novel way to manage fragmented data using file system statistics. We show clear advantages over current alternatives such as replacing the device or “hard-wiping” it. Furthermore, we present a unified scale that allows comparison of data quality on an organization-wide level.

Keywords: Disk Fragmentation; Preventive Maintenance; Data Disorder Measurement; Defragmentation; Storage Management

Doi:
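
The patented measurement method is not described in the abstract above, so it is not reproduced here. As a loose, assumed illustration of the general idea of condensing file-system fragmentation statistics into a single comparable scale, the sketch below maps per-file fragment counts to a 0-100 score; the scoring formula and the sample volumes are invented for demonstration.

```python
def fragmentation_quality(file_fragment_counts):
    """Map per-file fragment counts to a 0-100 score (100 = fully contiguous).

    file_fragment_counts : iterable of fragments per file (1 = contiguous)
    This is a toy metric, not the paper's patented "Data Quality" measure.
    """
    counts = list(file_fragment_counts)
    if not counts:
        return 100.0
    excess_fragments = sum(c - 1 for c in counts)       # fragments beyond one per file
    avg_excess = excess_fragments / len(counts)
    return round(100.0 / (1.0 + avg_excess), 1)         # decays as files split up

fresh_volume = [1] * 950 + [2] * 50                     # almost everything contiguous
aged_volume = [1] * 300 + [4] * 500 + [12] * 200        # heavily fragmented over time

print("fresh volume score:", fragmentation_quality(fresh_volume))
print("aged volume score:", fragmentation_quality(aged_volume))
```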