First, a Dynamic Hirschberg Test is conducted to detect strabismus. The test begins with both eyes following a moving fixation target under binocular viewing; during the test, each eye is briefly and imperceptibly occluded, which forces refixation in strabismic subjects and reveals latent strabismus. Photoscreening images taken under monocular viewing are used to calculate deviations from the expected binocular eye-movement path. A significant deviation in eye movement between binocular and monocular viewing indicates the presence of strabismus.

Second, a novel binocular adaptive photorefraction (APR) approach is developed to characterize the retinal reflex intensity profile according to the eye's refractive state. This approach calculates the retinal reflex profile by integrating the retinal reflex intensity from a coaxial and several eccentric photorefraction images. Theoretical simulations evaluate the influence of several human factors. An experimental APR device is constructed with 21 light sources to increase the spherical refraction detection range, and the additional light-source angular meridians enable detection of astigmatism. The experimentally measured distribution is characterized into parameters that describe the ocular refractive state.

Last, the APR design is further applied to detect vision problems associated with higher-order aberrations (e.g., cataracts, dry eye, keratoconus). A monocular prototype APR device is constructed with coaxial and eccentric light sources to acquire 13 monocular photorefraction images. Light sources projected inside and along the camera aperture improve detection sensitivity. The acquired reflex images are then decomposed over Zernike polynomials, and the complex reflex patterns are analyzed using the Zernike coefficient magnitudes.
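The Zernike decomposition step above can be sketched as a least-squares projection of a reflex image onto a set of low-order Zernike modes. This is a minimal illustration only: the mode set, grid size, and fitting method are assumptions, not the dissertation's actual pipeline.

```python
import numpy as np

def zernike_basis(n_pix=64):
    """Evaluate a few low-order (unnormalized) Zernike modes on the unit disk."""
    y, x = np.mgrid[-1:1:n_pix * 1j, -1:1:n_pix * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    modes = {
        "piston":   np.ones_like(rho),
        "tilt_x":   rho * np.cos(theta),
        "tilt_y":   rho * np.sin(theta),
        "defocus":  2 * rho**2 - 1,
        "astig_0":  rho**2 * np.cos(2 * theta),
        "astig_45": rho**2 * np.sin(2 * theta),
    }
    # design matrix: one column per mode, one row per pixel inside the disk
    A = np.column_stack([m[inside] for m in modes.values()])
    return list(modes), A, inside

def decompose(image, names, A, inside):
    """Least-squares projection of a reflex image onto the Zernike basis."""
    coeffs, *_ = np.linalg.lstsq(A, image[inside], rcond=None)
    return dict(zip(names, coeffs))

names, A, inside = zernike_basis()
# synthetic "reflex" dominated by defocus, as for a purely spherical error
y, x = np.mgrid[-1:1:64j, -1:1:64j]
img = 1.5 * (2 * (x**2 + y**2) - 1)
c = decompose(img, names, A, inside)
```

Because the synthetic image lies exactly in the span of the basis, the fit recovers a dominant defocus coefficient and near-zero magnitudes elsewhere; on real reflex images, the coefficient magnitudes serve as the pattern descriptors.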

One of the above-mentioned approaches used to construct multiple stochastic integrals with respect to stable random measures is the Lebesgue-Dunford type construction. This approach reduces the problem of stochastic integration to the problem of integration with respect to a vector measure. Using this approach, Krakowiak and Szulga (1985) developed multiple stochastic integrals of Banach-valued functions with respect to symmetric as well as nonsymmetric stable random measures. In this dissertation, using an approach similar to that of Krakowiak and Szulga (1985), we develop multiple stochastic integrals with respect to symmetric as well as (nonsymmetric) strictly semistable random measures with index of stability α ∈ (1, 2). In the nonsymmetric case, our methods yield the results on multiple stochastic integrals relative to strictly stable random measures with index α ∈ (1, 2) considered in [10, 13].

The most crucial role in the development of the integrals here is played by the inequalities (2.29). In these inequalities we establish a comparison theorem between the moments of integrals of certain simple functions relative to the strictly semistable random measure and the corresponding moments of integrals of these functions relative to a symmetric stable random measure. Once these inequalities are established, the methods of construction used here are similar to those used by Krakowiak and Szulga in [10, 13] to develop the integrals relative to a symmetric stable random measure.

In Chapter I, we collect the notation, definitions, and known results that are basic to this dissertation. In Chapter II, we develop the necessary tools and prove the crucial inequalities mentioned above. In the first part of Chapter II, we prove a comparison theorem for tail probabilities of nonsymmetric semistable random measures, using a distributional property of a strictly semistable random variable. In Chapter III, we define the multiple stochastic integrals of certain Banach-valued Borel measurable functions with respect to a strictly semistable random measure of index α. Then, we show that the class of Banach-valued integrable functions relative to a semistable random measure of index α coincides with the class of Banach-valued integrable functions relative to a symmetric stable random measure of index α.

Historically, production of optical pieces of the quality described required many repetitions of selective hand-lapping, polishing, and measuring. In the past ten years, production of these pieces has been enhanced by machining with diamond cutting tools on precision numerically controlled (NC) turning machines. These machines can generate axisymmetric surfaces competitive in quality with those produced by conventional means, without the expensive hand-work. This dissertation describes the design and testing of a prototype system for machining an off-axis parabolic sector by on-axis turning. The prototype system utilized an auxiliary slide, which carried the cutting tool. The slide was supported by captive air bearings and driven by a linear motor.

A transformation was performed on the parabola to describe the auxiliary slide motion in coordinates centered in the off-axis sector. A Fourier expansion yielded a scheme that permits the slide position commands to be generated in real time. The signal generator used position information from the base machine's transverse slide, along with zero-position and tachometer signals from the spindle, to ensure synchronization between all motions.

A test part was machined with the prototype system. The contour accuracy of the test part was measured between −0.0005 and +0.0009 inch. Surface finish varied from 7 microinches RMS near the part center to approximately 60 microinches near the outer edge.

Two important factors contributed to the workpiece inaccuracy. An electrical noise level equivalent to 15 to 20 microinches of vibration detracted from the surface finish and precluded the use of a diamond cutting tool. A structural resonance in the linear motor prevented the use of higher position-loop gain, which increased the following error. The system nevertheless served as a proof of principle and produced a workpiece requiring less hand-work than conventional methods would have required.

To support a real-time Flight Risk Assessment and Mitigation System (FRAMS), a sequential multi-stage approach is developed. The whole risk management process is considered in order to improve the safety of each flight, integrating the Analytic Hierarchy Process (AHP) and Fault Tree Analysis (FTA) to describe all levels of risk through a risk score. Unlike traditional fault tree analysis, severity level, time level, and synergy effects are taken into account when calculating the risk score for each flight. A risk tree is designed for risk data with a flat structure, and a time-sensitive optimization model is developed to support decisions on how to mitigate risk at the lowest possible cost. A case study is solved in reasonable time, demonstrating that the model is practical for a real-time system.
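A per-flight risk score of this kind can be sketched as an AHP-weighted aggregation over the leaf risks of a fault tree. This is a heavily hedged illustration: the severity, time, and synergy terms and their multiplicative combination are assumptions for demonstration, not the dissertation's exact formulation.

```python
def risk_score(risks, ahp_weights, synergy=1.0):
    """Aggregate leaf risks into one flight-level score.

    risks: list of (probability, severity 1-5, time_factor 0-1) per leaf
    risk of the fault tree; ahp_weights: AHP-derived importance weights.
    The multiplicative form below is an illustrative assumption.
    """
    base = sum(w * p * sev * t
               for w, (p, sev, t) in zip(ahp_weights, risks))
    return base * synergy  # synergy multiplier amplifies co-occurring risks

# hypothetical leaf risks for one flight
flight = [(0.10, 4, 1.0),   # weather: likely, severe, imminent
          (0.02, 5, 0.5),   # mechanical: rare, critical, less imminent
          (0.20, 2, 0.8)]   # crew fatigue: common, mild
score = risk_score(flight, ahp_weights=[0.5, 0.3, 0.2], synergy=1.1)
```

In a real-time setting the same function would be re-evaluated as the time factors change during flight preparation.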

On the other hand, an intensely competitive environment makes cost control increasingly important for airlines. An integrated approach is developed to improve the efficiency of reserve crew scheduling, which can help decrease cost. Unlike existing techniques, this approach integrates demand forecasting, reserve pattern generation, and optimization. A reserve forecasting tool is developed based on a large database. The expected value of each type of dropped trip is the output of this tool, based on the predicted dropping rate and the total scheduled trips. The rounding step in currently applied methods is avoided to preserve as much information as possible. The forecasting stage feeds the optimization stage through these expected values. A novel optimization model with a column generation algorithm is developed to generate patterns that cover these expected reserve demands while minimizing total cost. The many-to-many covering mode makes the model robust to forecasting errors caused by high uncertainty.
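The forecasting step described above, with the rounding step deliberately omitted, can be sketched as follows; the trip types and rates are hypothetical placeholders.

```python
def expected_reserve_demand(dropping_rate, scheduled_trips):
    """Expected number of dropped trips per trip type, kept fractional
    (no rounding) so the optimization stage sees full information."""
    return {t: dropping_rate[t] * scheduled_trips[t] for t in dropping_rate}

# hypothetical inputs: predicted dropping rates and scheduled trip counts
demand = expected_reserve_demand(
    dropping_rate={"short_haul": 0.03, "long_haul": 0.015},
    scheduled_trips={"short_haul": 420, "long_haul": 180})
```

The fractional expected values (e.g., 12.6 short-haul drops rather than a rounded 13) become the coverage targets of the column-generation model.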

In this dissertation, we re-examine the mathematical foundations and underlying philosophy of hierarchical, density-based, and centroid-based clustering algorithms, and reformulate them to incorporate physical information to solve a wide variety of transportation problems. In particular, we first show an example in which a density-based, data-driven geohash method improves ETA prediction accuracy by 40 seconds. We then design a network-space density-based clustering algorithm, Dijk-DBSCAN, which extends density-based clustering from n-dimensional space to a transportation network space. We show that Dijk-DBSCAN makes detection of accident and other types of hotspots more accurate. Further, we present an online stepwise-regression-based clustering strategy to collect vehicles' movement trajectories at low storage cost while maintaining high accuracy. We also explore clustering over arbitrary geometric shapes and develop a hierarchical clustering framework; this algorithm can be applied to allocate resources over large road networks under an energy-efficiency constraint. In the last section, we present a hierarchical clustering and greedy algorithm that solves general vehicle routing problems with pickup and delivery and with time windows. It handles mutually constrained location and time information by clustering orders (e.g., package-delivery orders, passengers' ride requests) and vehicles. The simple implementation and light computational cost make it superior to traditional optimization solvers and enable real-time, large-scale deployment in applications such as city-wide ride-sharing and time-constrained package delivery. Our work bridges the gap between classical clustering algorithms and the specific data types and problem configurations of the transportation domain. All our algorithms are designed to be highly computationally efficient and easily adaptable to similar problems.
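The core idea behind Dijk-DBSCAN, as described above, is to run DBSCAN with Dijkstra shortest-path distances on the road network in place of Euclidean distances. The sketch below shows that substitution in its simplest form; the graph, parameters, and per-point Dijkstra calls are illustrative assumptions, not the dissertation's optimized implementation.

```python
import heapq

def dijkstra(graph, src):
    """Shortest network distances from src (graph: node -> [(nbr, weight)])."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def network_dbscan(graph, points, eps, min_pts):
    """DBSCAN over points located at network nodes, with network distance
    replacing Euclidean distance -- the core substitution in Dijk-DBSCAN."""
    neighbors = {}
    for p in points:  # eps-neighborhood measured along the network
        dist = dijkstra(graph, p)
        neighbors[p] = [q for q in points if dist.get(q, float("inf")) <= eps]
    labels, cid = {}, 0
    for p in points:
        if p in labels:
            continue
        if len(neighbors[p]) < min_pts:
            labels[p] = -1  # noise (may later become a border point)
            continue
        cid += 1
        labels[p] = cid
        seeds = [q for q in neighbors[p] if q != p]
        while seeds:  # standard DBSCAN cluster expansion
            q = seeds.pop()
            if labels.get(q) == -1:
                labels[q] = cid
            if q in labels:
                continue
            labels[q] = cid
            if len(neighbors[q]) >= min_pts:
                seeds.extend(n for n in neighbors[q] if n != q)
    return labels

# hypothetical road network: a short chain a-b-c-d plus a remote node z
graph = {"a": [("b", 1.0)],
         "b": [("a", 1.0), ("c", 1.0)],
         "c": [("b", 1.0), ("d", 1.0)],
         "d": [("c", 1.0), ("z", 10.0)],
         "z": [("d", 10.0)]}
labels = network_dbscan(graph, ["a", "b", "c", "z"], eps=2.0, min_pts=2)
```

Here the three nearby incidents cluster together along the road while the remote one is flagged as noise, which is the behavior that makes network-space hotspot detection more faithful than Euclidean clustering.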

Results show that the existence of SFs in SiC considerably affects the defect configurations, which modifies the local atomic and electronic structures. Both changes influence the local energy landscape and thus affect the formation and migration energies of defects in the SF region. The lower barriers for Si interstitial diffusion near the faults may be responsible for the enhanced defect annihilation observed under irradiation in 3C-SiC with high densities of stacking faults.

In KTaO_{3}, site preferences for oxygen vacancy formation are expected to occur under epitaxial strain, which can result in orders-of-magnitude differences in the vacancy concentration at different oxygen positions. The diffusion behavior of oxygen vacancies in strain fields is also considered. In contrast to the strain-enhanced intra-plane diffusion, the inter-plane diffusion, which is perpendicular to the strained plane, is found to be impeded under the strain field.

CeO_{2} is considered as a surrogate for mixed oxide fuel. In this case, fission gases such as xenon (Xe) are trapped near the GBs and form gas bubbles. Our results show that the Xe segregation propensity is reduced as the size of the trap sites increases. Under hyper-stoichiometric conditions, the solubility of Xe trapped in the GB is significantly higher than in the bulk, suggesting that the Xe concentration in the GB would be correspondingly higher. The activation energies for Xe diffusion in the GB are lower than those in the bulk, indicating that Xe atoms are more mobile in the GB than in the bulk.

The results of the finite mixture analysis (FMA) suggest that at least three phenotypically distinct groups of EA existed between 1828 and 1984. This study was not able to determine with certainty whether these EA groups represented particular racialized groups. Multivariate analysis of variance (MANOVA) tests found a significant race effect between EA and AA with regard to late childhood/adolescent stress during the Early (1828-1881) period. AA had significantly smaller TR VNC diameters, suggesting they experienced significantly more late childhood/adolescent stress. MANOVA tests also found significant sex effects during the Intermediate (1914-1945) and Late (1946-1984) periods.

One-way analysis of variance (ANOVA) tests showed that early childhood stress, as demonstrated by AP VNC diameter and LEH, decreased over time. ANOVA tests also showed that late childhood/adolescent stress, as demonstrated by TR VNC diameter, increased over time. The findings of this study suggest that explorations of the possible effects of racialization on population heterogeneity and stress heterogeneity are warranted and should also consider the intersection of other identities such as sex, gender, class, language, religion, and nationality.

In the first part, we recall the notion of commuting squares, which were introduced by Popa and arise naturally as invariants in Jones' theory of subfactors. We review some of the main known examples of commuting squares, such as those constructed from finite groups and from complex Hadamard matrices. We also recall Nicoara's notion of defect, which gives an upper bound for the number of continuous deformations in the space of commuting squares. Finally, we prove new formulas that lead to computations of defects.

In the second part, we prove a finiteness result for circulant core Hadamard matrices (and thus, for their associated commuting squares). We show that the number of such matrices is finite when the order of the matrix is *p*+1 with *p* a fixed prime number. We then discuss concrete examples of these matrices of small orders.

In the third part, we give an explicit construction of multi-parametric analytic families of commuting squares obtained as deformations of group commuting squares. In the particular case of cyclic groups of non-prime orders, this gives multi-parametric families of complex Hadamard matrices containing the Fourier matrix. This result expands on the work of Nicoara and White. We then give bounds on the number of parameters in any family stemming from our construction method. We also discuss other parametric families containing the Fourier matrix, some of which include our families as (equivalent) sub-families.

In the last part, we construct a new class of commuting squares, which we call bismash commuting squares. They are obtained from bismash product Hopf algebras based on exact factorizations of a finite group *L*. We then investigate the defect of a bismash commuting square, which leads us to conjecture that the defect of the commuting square equals the defect of the group *L*. We prove this conjecture when *L* is the direct or semidirect product of two proper subgroups.

Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves that approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
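The Bezier-based target path generation mentioned above can be sketched with a single cubic segment; the control points below are hypothetical and real paths would typically chain several such segments.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=100):
    """Sample a cubic Bezier curve at n parameter values in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

# hypothetical control points for a gently curving target path (km)
path = cubic_bezier(np.array([0.0, 0.0]), np.array([2.0, 5.0]),
                    np.array([6.0, 5.0]), np.array([8.0, 0.0]))
```

The curve starts at the first control point and ends at the last, with the middle two points shaping the turn, which makes it easy to tune a path that resembles the motion of the targets of interest.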

We are interested in designing new algorithmic tools for applying sensor fusion techniques within the signal representation of sparse coding, a popular methodology in signal processing, machine learning, and statistics for representing data. This coding scheme is based on a machine learning technique and has been demonstrated to be capable of representing many modalities, such as natural images. We consider situations where we are interested not only in the support of the model being sparse, but also in reflecting a priori knowledge about the application at hand.

Our goal is to extract a discriminative representation of the multimodal data that makes it easy to find its essential characteristics in the subsequent analysis step, e.g., regression and classification. More precisely, sparse coding represents signals as linear combinations of a small number of atoms from a dictionary. The idea is to learn a dictionary that encodes intrinsic properties of the multimodal data in a decomposition coefficient vector favorable to maximal discriminatory power.
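The sparse-coding step just described can be illustrated with Orthogonal Matching Pursuit (OMP), a standard algorithm for finding a k-sparse combination of dictionary atoms; the dissertation does not specify its pursuit algorithm, so OMP and the orthonormal demo dictionary below are assumptions for illustration only.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: approximate x as a k-sparse
    linear combination of the columns (atoms) of dictionary D."""
    residual = x.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # greedily pick the atom most correlated with the residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all active coefficients by least squares on the support
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

# demo on an orthonormal dictionary, where recovery is exact
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x = 2.0 * D[:, 3] - 1.5 * D[:, 7]   # a 2-sparse signal
c = omp(D, x, k=2)
```

The recovered coefficient vector is the "decomposition coefficient vector" the abstract refers to; in a learned-dictionary setting, the dictionary `D` would itself be optimized so that these coefficients are discriminative.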

We carefully design a multimodal representation framework to learn discriminative feature representations by fully exploiting both the modality-shared information (the information shared by the various modalities) and the modality-specific information (the information content of each modality individually). The framework also automatically learns the weights for the various feature components in a data-driven scheme. In other words, the physical interpretation of our learning framework is to fully exploit the correlated characteristics of the available modalities while leveraging the modality-specific character of each modality and adjusting the corresponding weights of different parts of the feature during recognition.

This research was conducted at the Anthropology Research Facility in Knoxville, Tennessee, an outdoor laboratory for the study of human decomposition. Body donors with known residence histories (n=44) plus two additional donors at the Forensic Anthropology Research Facility in San Marcos, Texas, were enrolled in the study, and carbon, nitrogen, hydrogen, oxygen, and strontium isotopes from human hair samples of these donors were analyzed. Postmortem exposure times for the study ranged from 22 days to more than three years. Results of the study revealed that carbon and nitrogen isotope ratios in human hair, commonly used to make dietary inferences, undergo little change over time and are more reliable than hydrogen, oxygen, and strontium isotope ratios, which are impacted by the depositional environment. This study revealed that isotope ratios of human hair can change postmortem and are influenced by geographic placement location, surface or burial placement, and duration of exposure.

The optimized methods and experimental/bioinformatics techniques described in this dissertation should be broadly extendable to proteome characterization/protein interaction examination in various systems.

As the railways transformed from invention to commodity to industry, popular nineteenth-century engineering biographies shaped the story of the railway and its inventors, engineers, and investors into inflated myths of personal success and smooth progress. Valorizing self-interest over government interference, these narratives reinforced the laissez-faire arrangement between Parliament and railway companies. However, by mid-century redundant and abandoned lines, accidents, financial mania, and social immobility sparked debates about the detriment of lax regulation on public welfare. Along with newspapers and journals, fiction contributed to this debate. Works by Dickens, Gaskell, Riddell, Oliphant and others addressed these regrets by revising the story of the railways, correcting the strategic selection, erasure, and exaggeration of those early myths. In chapters focused on the expansion of lines, traveler safety, financial investment, and social mobility, this dissertation showcases authors who amplified the voice of public opinion in their fiction during a time when Parliament and boards of directors dominated the conversation. Demonstrating that to regret the past is also to envision a better future, such fiction provided the space within which Victorians could imagine balance between corporate, state, and public interests.
