BCIT Citations Collection | BCIT Institutional Repository


New approaches to designing genes by evolution in the computer
The field of Evolutionary Computation (EC) has been inspired by ideas from the classical theory of biological evolution, with, in particular, the components of a population from which reproductive parents are chosen, a reproductive protocol, a method for altering the genetic information of offspring, and a means for testing the fitness of offspring in order to include them in the population. In turn, impressive progress in EC - understanding the reasons for efficiencies in evolutionary searches - has begun to influence scientific work in the field of molecular evolution and in the modeling of biological evolution (Stemmer, 1994a,b; van Nimwegen et al., 1997; 1999; Crutchfield & van Nimwegen, 2001). In this chapter, we will discuss how developments in EC, particularly in the area of crossover operators for Genetic Algorithms (GA), provide new understanding of evolutionary search efficiencies, and the impact this can have on biological molecular evolution, including directed evolution in the test tube. GA approaches have six elements: encoding (the ‘chromosome’); a population; a method for selecting parents and making a child chromosome from the parents' chromosomes; a method for altering the child’s chromosomes (mutation and crossover/recombination); criteria for fitness; and rules, based on fitness, by which offspring are included in the population (and parents retained). We will discuss our work and others’ on each of these aspects, but our focus is on the substantial efficiencies that can be found in the alteration of the child chromosome step. For this, we take inspiration from real biological reproduction mechanisms., Book chapter, Published.
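The six GA elements listed in the abstract can be sketched as a minimal, generic genetic algorithm. This is an illustrative toy (bit-string encoding, a ones-count fitness, tournament selection, single-point crossover), not the chapter's own operators; all names and parameters are hypothetical.

```python
import random

def fitness(chrom):
    # Toy fitness criterion: count of 1-bits (stands in for any problem-specific score)
    return sum(chrom)

def evolve(pop_size=20, length=16, generations=60, mut_rate=0.05, seed=1):
    rng = random.Random(seed)
    # Encoding: a chromosome is a list of bits; the population is a list of chromosomes
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Parent selection: tournament of two, the fitter individual wins
        def pick():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        p1, p2 = pick(), pick()
        # Crossover: a single cut point splices the two parent chromosomes
        cut = rng.randrange(1, length)
        child = p1[:cut] + p2[cut:]
        # Mutation: flip each bit with small probability
        child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
        # Inclusion rule: the child replaces the current worst individual if at least as fit
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) >= fitness(pop[worst]):
            pop[worst] = child
    return max(fitness(c) for c in pop)

best = evolve()  # best fitness in the final population (at most 16)
```

The chapter's focus, the child-alteration step, corresponds to the crossover and mutation lines above; alternative crossover operators slot in at exactly that point.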
Noise in the segmentation gene network of Drosophila, with implications for mechanisms of body axis specification
Specification of the anteroposterior (head-to-tail) axis in the fruit fly Drosophila melanogaster is one of the best understood examples of embryonic pattern formation, at the genetic level. A network of some 14 segmentation genes controls protein expression in narrow domains which are the first manifestation of the segments of the insect body. Work in the New York lab has led to a databank of more than 3300 confocal microscope images, quantifying protein expression for the segmentation genes over a series of times during which protein pattern is developing (http://flyex.ams.sunysb.edu/FlyEx/). Quantification of the variability in expression evident in these data (both between embryos and within single embryos) allows us to determine error propagation in segmentation signalling. The maternal signal to the egg is highly variable, with noise levels several times those seen for expression of downstream genes. This implies that error suppression is active in the embryonic patterning mechanism. Error suppression is not possible with the favoured mechanism of local concentration gradient reading for positional specification. We discuss possible patterning mechanisms which do reliably filter input noise., Peer-reviewed article, Published.
A novel Volt-VAR Optimization engine for smart distribution networks utilizing Vehicle to Grid dispatch
In recent years, Smart Grid technologies such as Advanced Metering, Pervasive Control, Automation and Distribution Management have created numerous control and optimization opportunities and challenges for smart distribution networks. Availability of Co-Gen loads and/or Electric Vehicles (EVs) enables these technologies to inject reactive power into the grid by changing their inverter’s operating mode without considerable impact on their active power operation. This feature has created considerable opportunity for distribution network planners to explore whether EVs could be used in the distribution network as reliable VAR suppliers, and network operators may thus be able to employ some EVs as VAR suppliers in future distribution grids. This paper proposes an innovative Smart Grid-based Volt-VAR Optimization (VVO) engine, capable of minimizing system power loss cost as well as the operating cost of switched Capacitor Banks, while optimizing the system voltage using an improved Genetic Algorithm (GA) with two levels of mutation and two levels of crossover. The paper studies the impact of EVs with different charging and penetration levels on VVO in different operating scenarios. Furthermore, the paper demonstrates how a typical VVO engine could benefit from V2G’s reactive power support. In order to assess V2G impacts on VVO and test the applicability of the proposed VVO, the revised IEEE-123 Node Test Feeder, in the presence of various load types, is used as a case study., Article, Published. Received 24 May 2014, Revised 23 July 2015, Accepted 29 July 2015, Available online 8 August 2015.
NRC-IRC develops evaluation protocol for innovative vapour barrier
Vapour barriers were originally intended to keep building assemblies from getting wet, but they can sometimes end up preventing assemblies from drying out. An innovative new product to manage moisture accumulation in the building envelope, however, may be able to address both issues: while the product acts as a vapour barrier under most conditions, it also allows excess moisture to escape. The Canadian Construction Materials Centre (CCMC) set out to determine whether this product can serve as a vapour barrier and an air barrier system and whether it conformed to the intent of applicable building code requirements. In collaboration with NRC-IRC researchers, CCMC developed a testing protocol for its evaluation, which was based on laboratory testing requirements for vapour diffusion, air leakage control and durability., Article, Published.
On keeping secrets
Proceedings of the 2015 Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence in Austin, USA, 2015. Communication involves transferring information from one agent to another. An intelligent agent, either human or machine, is often able to choose to hide information in order to protect their interests. The notion of information hiding is closely linked to secrecy and dishonesty, but it also plays an important role in domains such as software engineering. In this paper, we consider the ethics of information hiding, particularly with respect to intelligent agents. In other words, we are concerned with situations that involve a human and an intelligent agent with access to different information. Is the intelligent agent justified in preventing a human user from accessing the information that they possess? This is trivially true in the case where access control systems exist. However, we are concerned with the situation where an intelligent agent is able to use a reasoning system to decide not to share information with all humans. On the other hand, we are also concerned with situations where humans hide information from machines. Are we ever under a moral obligation to share information with a computational agent? We argue that questions of this form are increasingly important now, as people are increasingly willing to divulge private information to machines with a great capacity to reason with that information and share it with others., Conference paper, Published.
On the representation and verification of cryptographic protocols in a theory of action
Proceedings of the 2010 Eighth Annual International Conference on Privacy, Security and Trust (PST) in Ottawa, ON, Canada, 17-19 Aug. 2010. Cryptographic protocols are usually specified in an informal, ad hoc language, with crucial elements, such as the protocol goal, left implicit. We suggest that this is one reason that such protocols are difficult to analyse, and are subject to subtle and nonintuitive attacks. We present an approach for formalising and analysing cryptographic protocols in a theory of action, specifically the situation calculus. Our thesis is that all aspects of a protocol must be explicitly specified. We provide a declarative specification of underlying assumptions and capabilities in the situation calculus. A protocol is translated into a sequence of actions to be executed by the principals, and a successful attack is an executable plan by an intruder that compromises the specified goal. Our prototype verification software takes a protocol specification, translates it into a high-level situation calculus (Golog) program, and outputs any attacks that can be found. We describe the structure and operation of our prototype software, and discuss performance issues., Conference paper, Published.
Optimal scaling of weight and waist circumference to height for adiposity and cardiovascular disease risk in individuals with spinal cord injury
Study Design: Observational cross-sectional study. Objectives: Body mass index (BMI), measured as a ratio of weight (Wt) to the square of height (Wt/Ht(2)), waist circumference (WC) and waist-to-height ratio (WHtR) are common surrogate measures of adiposity. It is not known whether alternate scaling powers for height might improve the relationships between these measures and indices of obesity or cardiovascular disease (CVD) risk in individuals with spinal cord injury (SCI). We aimed to estimate the values of 'x' that render Wt/Ht(x) and WC/Ht(x) maximally correlated with dual energy x-ray absorptiometry (DEXA) total and abdominal body fat and Framingham Cardiovascular Risk Scores. Setting: Canadian public research institution. Methods: We studied 27 subjects with traumatic SCI. Height, Wt and body fat measurements were determined from DEXA whole-body scans. WC measurements were also obtained, and individual Framingham Risk Scores were calculated. For values of 'x' ranging from 0.0 to 4.0, in increments of 0.1, correlations between Wt/Ht(x) and WC/Ht(x) with total and abdominal body fat (kg and percentages) and Framingham Risk Scores were computed. Results: We found that BMI was a poor predictor of CVD risk, regardless of the scaling factor. Moreover, BMI was strongly correlated with measures of obesity, and modification of the scaling factor from the standard (Wt/Ht(2)) is not recommended. WC was strongly correlated with both CVD risk and obesity, and standard measures (WC and WHtR) are of equal predictive power. Conclusion: On the basis of our findings from this sample, alterations in scaling powers may not be necessary in individuals with SCI; however, these findings should be validated in a larger cohort., Peer-reviewed article, Published. Received 25 February 2014; revised 1 May 2014; accepted 1 August 2014; published online 30 September 2014.
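The core computation described in the abstract - scanning exponents x from 0.0 to 4.0 in steps of 0.1 and keeping the one that maximizes the correlation of Wt/Ht^x with an adiposity measure - can be sketched as follows. The data below are synthetic and purely illustrative (NOT the study's measurements), and the function names are hypothetical.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

def best_exponent(weights, heights, adiposity, lo=0.0, hi=4.0, step=0.1):
    """Scan x in [lo, hi] and return the exponent maximizing |r| between
    Wt/Ht^x and the adiposity measure, along with that correlation."""
    best_x, best_r = lo, 0.0
    x = lo
    while x <= hi + 1e-9:
        index = [w / h ** x for w, h in zip(weights, heights)]
        r = pearson(index, adiposity)
        if abs(r) > abs(best_r):
            best_x, best_r = x, r
        x += step
    return round(best_x, 1), best_r

# Illustrative synthetic data (heights in m, weights in kg, body fat in %)
heights = [1.60, 1.65, 1.70, 1.75, 1.80, 1.85]
weights = [55.0, 62.0, 70.0, 74.0, 82.0, 90.0]
fat_pct = [24.0, 26.0, 29.0, 30.0, 33.0, 36.0]
x, r = best_exponent(weights, heights, fat_pct)
```

The same scan applies to WC/Ht^x by substituting waist circumference for weight; the study repeats it against each outcome (total fat, abdominal fat, Framingham score).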
Optimization and single-laboratory validation of a method for the determination of flavonolignans in milk thistle seeds by high-performance liquid chromatography with ultraviolet detection
Seeds of milk thistle, Silybum marianum (L.) Gaertn., are used for treatment and prevention of liver disorders and were identified as a high priority ingredient requiring a validated analytical method. An AOAC International expert panel reviewed existing methods and made recommendations concerning method optimization prior to validation. A series of extraction and separation studies were undertaken on the selected method for determining flavonolignans from milk thistle seeds and finished products to address the review panel recommendations. Once optimized, a single-laboratory validation study was conducted. The method was assessed for repeatability, accuracy, selectivity, LOD, LOQ, analyte stability, and linearity. Flavonolignan content ranged from 1.40 to 52.86 % in raw materials and dry finished products and ranged from 36.16 to 1570.7 μg/mL in liquid tinctures. Repeatability for the individual flavonolignans in raw materials and finished products ranged from 1.03 to 9.88 % RSD, with HorRat values between 0.21 and 1.55. Calibration curves for all flavonolignan concentrations had correlation coefficients of >99.8 %. The LODs for the flavonolignans ranged from 0.20 to 0.48 μg/mL at 288 nm. Based on the results of this single-laboratory validation, this method is suitable for the quantitation of the six major flavonolignans in milk thistle raw materials and finished products, as well as multicomponent products containing dandelion, schizandra berry, and artichoke extracts. It is recommended that this method be adopted as First Action Official Method status by AOAC International., Peer-reviewed article, Published. Received 1 June 2015; Revised 14 July 2015; Accepted 16 July 2015; Published online 31 July 2015.
Ordinal conditional functions for nearly counterfactual revision
Proceedings of the 16th International Workshop on Non-Monotonic Reasoning (NMR 2016), Cape Town, South Africa; April 22-24, 2016. We are interested in belief revision involving conditional statements where the antecedent is almost certainly false. To represent such problems, we use Ordinal Conditional Functions that may take infinite values. We model belief change in this context through simple arithmetical operations that allow us to capture the intuition that certain antecedents can not be validated by any number of observations. We frame our approach as a form of finite belief improvement, and we propose a model of conditional belief revision in which only the "right" hypothetical levels of implausibility are revised., Conference paper, Published.
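The kind of rank arithmetic the abstract refers to can be illustrated with standard Spohn-style conditionalization over ranked worlds, where infinite ranks model antecedents that are "almost certainly false". This is a generic sketch of ordinary OCF revision, not the paper's improvement operator, and all names are illustrative.

```python
import math

INF = math.inf  # rank of maximally implausible worlds

def rank_of(kappa, worlds):
    """Rank of a proposition = minimum rank over the worlds satisfying it."""
    return min((kappa[w] for w in worlds), default=INF)

def revise(kappa, phi_worlds, strength=1):
    """Spohn-style conditionalization: shift the phi-worlds down so the most
    plausible one has rank 0, and push all other worlds up by `strength`.
    Worlds at infinite rank stay infinite: no finite number of observations
    can validate them."""
    base = rank_of(kappa, phi_worlds)
    out = {}
    for w, r in kappa.items():
        if w in phi_worlds:
            out[w] = r - base if base < INF else INF
        else:
            out[w] = r + strength
    return out

# Four worlds over propositions p, q ('pq' = both true, '~p~q' = both false)
kappa = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": INF}
revised = revise(kappa, {"~pq", "~p~q"})  # revise by ~p
```

After revising by ~p, the finitely implausible ~p-world drops to rank 0 while the infinitely implausible ~p~q-world remains at infinity, capturing the intuition that only the "right" levels of implausibility move.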
The path of the smart grid
Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree on the inevitability of this massive transformation. It is a move that will affect not only their business processes but also their organization and technologies., Article, Published.
Pattern selection in plants
Background and Aims A study is made by computation of the interplay between the pattern formation of growth catalysts on a plant surface and the expansion of the surface to generate organismal shape. Consideration is made of the localization of morphogenetically active regions, and the occurrence within them of symmetry-breaking processes such as branching from an initially dome-shaped tip or meristem. Representation of a changing and growing three-dimensional shape is necessary, as two-dimensional work cannot distinguish, for example, formation of an annulus from dichotomous branching. Methods For the formation of patterns of chemical concentrations, the Brusselator reaction-diffusion model is used, applied on a hemispherical shell and generating patterns that initiate as surface spherical harmonics. The initial shape is hemispherical, represented as a mesh of triangles. These are combined into finite elements, each made up of all the triangles surrounding each node. Chemical pattern is converted into shape change by moving nodes outwards according to the concentration of growth catalyst at each, to relieve misfits caused by area increase of the finite element. New triangles are added to restore the refinement of the mesh in rapidly growing regions. Key Results The postulated mechanism successfully generates: tip growth (or stalk extension by an apical meristem) to ten times original hemisphere height; tip flattening and resumption of apical advance; and dichotomous branching and higher-order branching to make whorled structures. Control of the branching plane in successive dichotomous branchings is tackled with partial success and clarification of the issues. Conclusions The representation of a growing plant surface in computations by an expanding mesh that has no artefacts constraining changes of shape and symmetry has been achieved. 
It is shown that one type of pattern-forming mechanism, Turing-type reaction-diffusion, acting within a surface to pattern a growth catalyst, can generate some of the most important types of morphogenesis in plant development., Peer-reviewed article, Published. Received: 26 July 2007; Returned for revision: 5 October 2007; Accepted: 15 October 2007; Published electronically: 28 November 2007.
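The Turing-type mechanism named in the conclusions can be illustrated with the Brusselator reaction-diffusion system in its simplest setting: a 1-D periodic line of cells, rather than the paper's expanding hemispherical finite-element mesh. This is a minimal sketch with illustrative parameter values chosen inside the Turing-instability regime; it is not the paper's implementation.

```python
import random

def brusselator_1d(n=64, steps=10000, dt=0.01, A=2.0, B=4.5, Du=1.0, Dv=8.0, seed=0):
    """Explicit-Euler integration of the Brusselator with diffusion:
       du/dt = A - (B+1)u + u^2 v + Du * lap(u)
       dv/dt = B u - u^2 v      + Dv * lap(v)
    on a periodic 1-D grid (spacing 1). With Dv >> Du the homogeneous
    steady state (u, v) = (A, B/A) is Turing-unstable and a spatial
    pattern in u grows from small noise."""
    rng = random.Random(seed)
    u = [A + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    v = [B / A + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    for _ in range(steps):
        lap_u = [u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n] for i in range(n)]
        lap_v = [v[(i - 1) % n] - 2 * v[i] + v[(i + 1) % n] for i in range(n)]
        u = [u[i] + dt * (A - (B + 1) * u[i] + u[i] ** 2 * v[i] + Du * lap_u[i])
             for i in range(n)]
        v = [v[i] + dt * (B * u[i] - u[i] ** 2 * v[i] + Dv * lap_v[i])
             for i in range(n)]
    return u

u = brusselator_1d()
amplitude = max(u) - min(u)  # a clearly nonzero amplitude indicates a spatial pattern
```

In the paper's setting the same kinetics run on a hemispherical surface, the pattern initiates as surface spherical harmonics, and the catalyst concentration then drives outward movement of mesh nodes; the 1-D sketch shows only the pattern-forming step.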
A performance comparison between FRC and WWM reinforced slabs on grade
Proceedings of the 4th International Conference on Structural Health Monitoring of Intelligent Infrastructure (SHMII-4) 2009, 22-24 July 2009, Zurich, Switzerland. A comparative experimental study was conducted to investigate the effectiveness of fiber reinforcement as a non-corrosive alternative to welded-wire reinforcement in slabs on grade. Six full-scale slabs-on-grade, reinforced with various combinations of WWM (Welded Wire Mesh), polymeric macro-synthetic fibers (PMF) and cellulose fibers, were tested under a centrally concentrated load. Their ductility and load carrying capacity were evaluated and compared. Based on the results of this study, it seems that high dosages of polymeric macrofibers can be used to successfully reinforce concrete slabs. Given that the use of PMF eliminates the possibility of corrosion of reinforcement, this may be a superior option. Low dosages of fibers, however, appear to be an ineffective replacement for WWM. Low dosages of PMF and cellulose fiber, added on their own or in combination with each other, failed to provide sufficient ductility or load carrying capacity compared to the control slab when subjected to the load test. Slabs reinforced with cellulose fiber had a poor mechanical response in comparison to WWM, and therefore cellulose fiber on its own is not recommended., Conference paper, Published.
Performance of sprayed fiber reinforced polymer strengthened timber beams
A study was carried out to investigate the use of Sprayed Fiber Reinforced Polymer (SFRP) for retrofit of timber beams. A total of 10 full-scale specimens were tested. Two different timber preservatives and two different bonding agents were investigated. Strengthening was characterized using load deflection diagrams. Results indicate that it is possible to enhance load-carrying capacity and energy absorption characteristics using the technique of SFRP. Of the two types of preservatives investigated, the technique appears to be more effective for creosote-treated specimens, where up to a 51% improvement in load-carrying capacity and a 460% increase in the energy absorption capacity were noted. Effectiveness of the bonding agent used was dependent on the type of preservative the specimen had been treated with., Peer-reviewed article, Published. Received 26 July 2010; Revised 8 October 2010; Accepted 12 October 2010.
Performance-risk analysis for the design of high-performance affordable homes
Proceedings of the 3rd Building Enclosure Science & Technology (BEST3) Conference, Atlanta, USA, April 2-4, 2014. Net-zero energy, emissions, and carbon sustainability targets for buildings are becoming achievable with the use of renewable energy technologies and high-performance construction, equipment, and appliances. Methodologies and tools have also been developed and tested to help design teams search for viable strategies for net-zero buildings during the early stages of design. However, the risks of underperformance of high-performance technologies, systems, and whole buildings are usually not assessed methodically. The negative consequences have been, often reluctantly, reported. This paper presents a methodology for explicitly considering and assessing underperformance risks during the design of high-performance buildings. The methodology is a first attempt to formalize extensive applied research and industry experiences in the quest for net-zero energy homes in the U.S., and to build on existing tools and methods from performance-based design, as well as optimization, decision, and risk analysis. The methodology is knowledge-driven and iterative, so that newly acquired knowledge can be incorporated into the decision making. As a point of departure in the process, a clear definition of the project vision and a two-level organization of the corresponding building function performance objectives are laid out, with objectives further elaborated into high-performance targets and viable alternatives selected from the knowledge-base to meet these. Then, a knowledge-guided search for optimized design strategies to meet the performance targets takes place, followed by a selection of optimized strategies to meet the objectives and the identification of associated risks from the knowledge-base. These risks are then evaluated, leading either to mitigation strategies or to changing targets and alternatives, and feeding back to the knowledge-base.
A case study of affordable homes in a hot humid climate is used to test the methodology and demonstrate its application. The case study clearly illustrates the advantages of using the methodology to minimize underperformance risks. Further work will follow to develop the underpinning mathematical formalisms of the knowledge base and the risk evaluation procedure., Conference paper, Published.
Phase change material's (PCM) impacts on the energy performance and thermal comfort of buildings in a mild climate
Current residential buildings are of lightweight construction. As such, they are prone to frequent indoor air temperature fluctuations, which have been proven detrimental for thermal comfort and mechanical system energy consumption. This is reflected in the energy consumption statistics for residential buildings: more than 62% of building energy use goes toward maintaining comfortable indoor conditions. Phase change materials (PCMs), a class of latent heat thermal storage materials, have the potential to increase the thermal mass of these buildings without drastically affecting current construction techniques. In this paper, the potential of phase change materials is investigated through numerical and experimental studies. The field experimental study is conducted using twin side-by-side buildings exposed to the same interior and exterior boundary conditions, and EnergyPlus, after being benchmarked with the experimental results, is used for the numerical study. The numerical study is carried out for an existing residential apartment unit with particular emphasis on the effects of different design parameters such as orientation and window-to-wall ratio. Preliminary analyses of experimental data show that phase change materials are effective in stabilizing the indoor air by reversing the heat flow direction: the indoor air and wall temperature fluctuations are reduced by 1.4 °C and 2.7 °C respectively. Benchmarking of the numerical simulation gives confidence in predicting the interior conditions, since discrepancies between experimental data and numerical data are within the tolerance limits of the measuring device. Further analysis of the numerical data shows that phase change material is effective in moderating the operative temperature, but this does not translate into significant thermal comfort improvement when evaluated over a night-time occupancy regime in the summer. 
On the contrary, PCM is effective in lowering the heating energy demand by up to 57% during the winter condition., Peer-reviewed article, Published. Received 1 October 2015, Revised 22 January 2016, Accepted 23 January 2016, Available online 29 January 2016.
A pilot scale comparison of the effects of chemical pre-treatments of wood chips on the properties of low consistency refined TMP
Proceedings of the International Mechanical Pulping Conference 2016, IMPC 2016. After decades of research and development, the technology of thermomechanical pulping (TMP) has dramatically improved, resulting in higher pulp quality, especially strength. However, the TMP industry is still faced with the challenge of continually increasing energy costs. One approach to reducing the energy costs is to replace the second-stage high consistency (HC) refiner with several low consistency (LC) refiners. This is based on the observation that low consistency refining is more energy efficient than high consistency refining. The limitation of LC refining is loss of paper strength due to the high frequency of fibre cutting, especially at high refining intensity. Chemical treatment combined with low consistency refining provides an opportunity for even further energy savings. The chemical treatment could improve pulp properties, allowing for further energy reduction in the HC refining stage or reduced intensity during LC refining, resulting in less fibre cutting. Indeed, it is also possible that the chemical treatment itself will improve the resistance of the fibre to cutting during LC refining., Conference paper, Published.
