Proceedings of the 3rd Building Enclosure Science & Technology (BEST3) Conference, Atlanta, USA, April 2-4, 2014. Net-zero energy, emissions, and carbon sustainability targets for buildings are becoming achievable with the use of renewable energy technologies and high-performance construction, equipment, and appliances. Methodologies and tools have also been developed and tested to help design teams search for viable strategies for net-zero buildings during the early stages of design. However, the risks of underperformance of high-performance technologies, systems, and whole buildings are usually not assessed methodically, and the negative consequences have been, often reluctantly, reported. This paper presents a methodology for explicitly considering and assessing underperformance risks during the design of high-performance buildings. The methodology is a first attempt to formalize extensive applied research and industry experience in the quest for net-zero energy homes in the U.S., and it builds on existing tools and methods from performance-based design, as well as optimization, decision, and risk analysis. The methodology is knowledge-driven and iterative, so that newly acquired knowledge can be incorporated into the decision making. As a point of departure, a clear definition of the project vision and a two-level organization of the corresponding building function performance objectives are laid out, with objectives further elaborated into high-performance targets and viable alternatives selected from the knowledge base to meet them. A knowledge-guided search for optimized design strategies to meet the performance targets then takes place, followed by the selection of optimized strategies to meet the objectives and the identification of associated risks from the knowledge base. These risks are then evaluated, leading either to mitigation strategies or to changes in targets and alternatives, and feeding back to the knowledge base. A case study of affordable homes in a hot humid climate is used to test the methodology and demonstrate its application. The case study clearly illustrates the advantages of using the methodology to minimize underperformance risks. Further work will follow to develop the underpinning mathematical formalisms of the knowledge base and the risk evaluation procedure., Conference paper, Published.
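As a rough illustration of the risk-evaluation step described in this abstract, the following hypothetical Python sketch scores each underperformance risk by likelihood and consequence and routes it to acceptance, mitigation, or a revision of targets and alternatives. The names, scales, and thresholds are illustrative assumptions, not the paper's formalism (which the abstract leaves to future work).

```python
# Hypothetical sketch of the risk-evaluation step: score underperformance risks
# from the knowledge base and decide how to respond. Scales and thresholds are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float   # probability of underperformance, 0..1
    consequence: float  # relative impact on the performance target, 0..1

def score(r: Risk) -> float:
    """Expected-impact score used to rank risks."""
    return r.likelihood * r.consequence

def triage(risks, accept_below=0.05, revise_above=0.30):
    """Accept small risks, mitigate moderate ones, revise targets for severe ones."""
    decisions = {}
    for r in sorted(risks, key=score, reverse=True):
        if score(r) < accept_below:
            decisions[r.name] = "accept"
        elif score(r) > revise_above:
            decisions[r.name] = "revise targets/alternatives"
        else:
            decisions[r.name] = "mitigate"
    return decisions

if __name__ == "__main__":
    risks = [Risk("PV array output below rating", 0.3, 0.4),
             Risk("Envelope airtightness target missed", 0.2, 0.1)]
    print(triage(risks))
```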
Proceedings of the International Mechanical Pulping Conference 2016, IMPC 2016. After decades of research and development, the technology of thermomechanical pulping (TMP) has improved dramatically, resulting in higher pulp quality, especially strength. However, the TMP industry still faces the challenge of continually increasing energy costs. One approach to reducing energy costs is to replace the second-stage high consistency (HC) refiner with several low consistency (LC) refiners, based on the observation that low consistency refining is more energy efficient than high consistency refining. The limitation of LC refining is a loss of paper strength due to the high frequency of fibre cutting, especially at high refining intensity. Chemical treatment combined with low consistency refining provides an opportunity for even further energy savings. The chemical treatment could improve pulp properties, allowing for further energy reduction in the HC refining stage or reduced intensity during LC refining, resulting in less fibre cutting. Indeed, it is also possible that the chemical treatment itself will improve the resistance of the fibre to cutting during LC refining., Conference paper, Published.
Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education. The ability to predict student performance in a course or program creates opportunities to improve educational outcomes. With effective performance prediction approaches, instructors can allocate resources and instruction more accurately. Research in this area seeks to identify features that can be used to make predictions, to identify algorithms that can improve predictions, and to quantify aspects of student performance. Moreover, research in predicting student performance seeks to determine interrelated features and to identify the underlying reasons why certain features work better than others. This working group report presents a systematic literature review of work in the area of predicting student performance. Our analysis shows a clearly increasing amount of research in this area, as well as an increasing variety of techniques used. At the same time, the review uncovered a number of issues with research quality that drive a need for the community to provide more detailed reporting of methods and results and to increase efforts to validate and replicate work., Peer reviewed, Conference paper, Published.
Proceedings of IEEE Canadian Conference on Electrical and Computer Engineering (CCECE 2014), May 2014, Toronto, Canada. Smart Grid functions such as Advanced Metering Infrastructure, Pervasive Control and Distribution Management Systems have brought numerous control and optimization opportunities for distribution networks through more accurate and reliable techniques. This paper presents a new predictive approach for Volt/VAr Optimization (VVO) of smart distribution systems using Neural Networks (NN) and a Genetic Algorithm (GA). The proposed predictive algorithm is capable of predicting the load profile of target nodes a day ahead by employing the historical metrology data of Smart Meters. It can further perform a comprehensive VVO in order to minimize distribution network loss/operating costs and run Conservation Voltage Reduction (CVR) to conserve more energy. To test the merits of the proposed algorithm, the British Columbia Institute of Technology north campus distribution grid is used as a case study., Conference paper, Published.
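To make the two stages of this abstract concrete, the following minimal Python sketch shows only the genetic-algorithm search over Volt/VAr settings for a single forecast hour. The neural-network forecast is stubbed as a fixed value, and the loss surrogate, setting ranges, and GA parameters are illustrative assumptions rather than the paper's implementation (which would run a power-flow solver on the BCIT feeder model).

```python
# Illustrative GA search over Volt/VAr settings for one forecast hour.
# The "forecast" and loss surrogate are placeholders, not the paper's models.

import random

TAPS = list(range(-8, 9))          # OLTC tap positions (assumed range)
N_CAPS = 3                         # switchable capacitor banks (assumed count)

def fitness(tap, caps, load_kw):
    """Toy objective: surrogate losses plus a penalty on voltage deviation."""
    v = 1.0 + 0.00625 * tap + 0.01 * sum(caps)        # rough per-unit voltage
    losses = 0.04 * load_kw * (1.0 / v) ** 2          # losses fall as voltage rises
    penalty = 1000.0 * max(0.0, abs(v - 1.0) - 0.05)  # keep v within +/- 5%
    return losses + penalty

def random_individual():
    return random.choice(TAPS), [random.randint(0, 1) for _ in range(N_CAPS)]

def ga_volt_var(load_kw, pop_size=30, generations=40):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind, load_kw))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            (t1, c1), (t2, c2) = random.sample(survivors, 2)
            tap = random.choice([t1, t2])
            caps = [random.choice(pair) for pair in zip(c1, c2)]
            if random.random() < 0.1:                  # mutation
                tap = random.choice(TAPS)
            children.append((tap, caps))
        pop = survivors + children
    best = min(pop, key=lambda ind: fitness(*ind, load_kw))
    return best, fitness(*best, load_kw)

if __name__ == "__main__":
    forecast_kw = 850.0            # stand-in for the day-ahead NN forecast
    print(ga_volt_var(forecast_kw))
```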
14th Canadian Conference on Building Science and Technology, Toronto, Canada, October 29th-30th, 2014.
A field study is presented here on the investigation of the correlation between wind-driven rain (WDR), as the driving force, and the relative proportions of water penetration at intended defects (openings) located at the interface of windows and exterior walls. In this field study, eight full-scale exterior-wall panels of vinyl siding and stucco claddings were built and installed on a field testing station exposed to the rain of British Columbia’s west coast climate. This paper focuses on the preliminary results from one of the stucco wall panels with a discontinuity in the sealant around the perimeter of the windows. The water passing through this defect was collected and measured. The instantaneous and automatic water collection measurements were synchronized with the data gathered by a nearby weather station on wind-driven rain intensity, wind speed and direction. In addition, rain gauges on the exterior of the walls collected the wind-driven rain against each façade of the test station. Compared to previous computer simulations and laboratory experimental studies on rain penetration through exterior walls, this study was conducted under more realistic conditions: the panels were subjected to real wind-driven rain events, and the experiment collectively took into account rain that splashed off the wall façade upon impact and rain water reaching the defect location due to run-off. The study is ongoing; when complete, however, its results will be useful for fine-tuning the principal moisture load that is applied in hygrothermal performance assessment and design of exterior wall systems., Conference paper, Published.
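For readers unfamiliar with how wind-driven rain deposition on a façade is typically estimated from weather-station data, the sketch below applies the common semi-empirical relation (deposition factor of roughly 0.2 kg·s/(m³·mm), wind speed, horizontal rainfall, and the cosine of the wind-to-façade angle). This is a generic textbook relation with an assumed deposition factor, not necessarily the model used in this study.

```python
# Generic semi-empirical estimate of wind-driven rain (WDR) deposition on a facade.
# The deposition factor is an assumed typical value, not a result from the study.

import math

def wdr_rate(wind_speed_ms, horizontal_rain_mm_h, wind_dir_deg,
             facade_normal_deg, deposition_factor=0.2):
    """Return WDR deposition on the facade in L/(m^2*h) (~ mm/h on the wall)."""
    angle = math.radians(wind_dir_deg - facade_normal_deg)
    cos_term = max(0.0, math.cos(angle))   # no deposition for leeward winds
    return deposition_factor * wind_speed_ms * horizontal_rain_mm_h * cos_term

if __name__ == "__main__":
    # 5 m/s wind, 4 mm/h horizontal rain, wind 30 degrees off the facade normal
    print(round(wdr_rate(5.0, 4.0, 30.0, 0.0), 2))   # ~3.46 L/(m^2*h)
```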
This working group asserts that Program Comprehension (PC) plays a critical part in the writing process. For example, this abstract is written from a basic draft that we have edited and revised until it clearly presents our idea. Similarly, a program is written in an incremental manner, with each step being tested, debugged and extended until the program achieves its goal. Novice programmers should develop their program comprehension as they learn to code, so that they are able to read and reason about code while they are writing it. To foster such competencies our group has identified two main goals: (1) to collect and define learning activities that explicitly cover key components of program comprehension and (2) to define possible learning trajectories that will guide teachers using those learning activities in their CS0/CS1 or K-12 courses.
We plan to achieve these goals as follows: Step 1 Review the current state of research and development by analyzing literature on classroom activities that improve program comprehension. Step 2 Concurrently, survey lecturers at various institutions on their use of workshop activities to foster PC. Step 3 Use the outputs from both activities to define and conceptualize what is meant by PC in the context of novice programmers. Step 4 Catalog learning activities with regard to their prerequisites, intended learning outcomes and additional special characteristics. Step 5 Develop a map of learning activities and thereby also models of probable learning trajectories., Not peer reviewed, Conference proceedings
A monoclonal antibody, named CAMAL-1, was raised previously in our laboratory to a common antigen of acute myeloid leukemia (CAMAL), and was shown to be highly specific in its recognition of cells from patients with acute (AML) or chronic (CML) myelogenous leukemia. CAMAL was also reported to be prognostic of disease, in that patients whose numbers of CAMAL-1 reactive cells were high, or rose over time, had poorer prognoses and shorter survival times than patients whose CAMAL values were low or decreased. This correlation between CAMAL and disease prognosis led to the discovery that CAMAL-1 immunoaffinity-purified leukemic cellular lysates contained a selective growth inhibitory activity for normal myeloid progenitor cells, since the growth of CML progenitors was not inhibited. The work described in this thesis focused primarily on the purification and characterization of the myelopoietic activity present in the CAMAL preparations, and its relationship to the leukemic marker (CAMAL). Initial purifications involved CAMAL-1 immunoaffinity chromatography of leukemic cellular lysates, followed by FPLC molecular size fractionation and/or preparative SDS-PAGE. The myelopoietic activity was located within a 30-35 kDa molecular weight fraction (P30), and the P30 fraction was consistently found to be selective in its inhibition of normal myeloid progenitors, since the growth of CML progenitors was not inhibited but was, in fact, stimulated. Antibodies were raised to P30 and used in the subsequent purification and characterization of the myelopoietic activity. Amino acid sequence analysis of the N-terminus and P30 tryptic peptides strongly suggested that P30 belonged to the serine protease family of enzymes, and the results obtained from protease assays indicated that P30 preparations did possess enzyme activity. Prior to the completion of P30 molecular cloning experiments, however, the cDNA sequence for azurocidin/CAP37 was reported, and its predicted amino acid sequence was found to be identical to those obtained from the P30 protein samples. Azurocidin is a proteolytically inactive serine protease homologue, normally present in neutrophilic granules. Purified azurocidin did not possess inhibitory activity in normal progenitor cell assays; therefore, in order to isolate the biologic activity from azurocidin and other potentially contaminating proteins, P30 preparations were fractionated by reverse phase HPLC. The rpHPLC profiles were found to be similar to those reported for neutrophilic granules; however, the myelopoietic activity was obtained in a single rpHPLC fraction that aligned with the front portion of the azurocidin protein peak. Two dimensional isoelectric focusing/SDS-PAGE analysis of the biologically active rpHPLC fraction confirmed that it contained azurocidin, and no additional protein species were detected. Only the earlier eluting azurocidin rpHPLC fraction mediated the myelopoietic activity, and this fraction was also enriched in the higher molecular weight isoforms of azurocidin. Therefore, it appeared that a variably glycosylated isoform of azurocidin was mediating the biologic effects on myeloid progenitor cells, and because azurocidin obtained from normal neutrophils did not possess the myelopoietic activity, we speculate that the bioactive isoform of azurocidin is present in relatively higher amounts and/or is uniquely synthesized by leukemic cells., Thesis, Published.
Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, South Korea, 11-15 July 2017. In general, manual wheelchairs are designed with a fixed frame, which is not optimal for every situation. Adjustable on-the-fly seating allows users to rapidly adapt their wheelchair configuration to suit different tasks. These changes move the center of gravity (CoG) of the system, altering the wheelchair's stability and maneuverability. To assess these changes, a computer simulation of a manual wheelchair was created with adjustable seat, backrest, rear axle position and user position, and validated with experimental testing. The stability of the wheelchair was most affected by the position of the rear axle, but adjustments to the backrest and seat angles also resulted in stability improvements that could be used when wheeling in the community. These findings identify the most influential parameters for wheelchair stability and maneuverability, and provide quantitative guidelines for the use of manual wheelchairs with on-the-fly adjustable seats., Conference paper, Published.
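The effect of moving the centre of gravity on rearward stability can be illustrated with a simple static calculation, sketched below. The geometry and masses are placeholder assumptions for illustration, not values from the validated simulation described in the abstract.

```python
# Minimal static sketch: system CoG from component masses/positions and the
# rearward tip angle about the rear axle. All numbers are assumed placeholders.

import math

def system_cog(parts):
    """parts: list of (mass_kg, x_m, z_m); x measured forward of the rear axle."""
    m = sum(p[0] for p in parts)
    x = sum(p[0] * p[1] for p in parts) / m
    z = sum(p[0] * p[2] for p in parts) / m
    return x, z

def rear_tip_angle_deg(cog_x, cog_z):
    """Angle the chair can tilt back before the CoG passes over the rear axle."""
    return math.degrees(math.atan2(cog_x, cog_z))

if __name__ == "__main__":
    parts = [(15.0, 0.15, 0.30),   # wheelchair frame (mass, x, z)
             (75.0, 0.10, 0.60)]   # user
    x, z = system_cog(parts)
    print(round(rear_tip_angle_deg(x, z), 1), "degrees")
```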
This paper aims to investigate quasi real-time ZIP load models for new Smart Grid-based Volt-VAR Optimization (VVO) techniques. As recent VVO solutions are able to perform in quasi real-time using Advanced Metering Infrastructure (AMI) data, more accurate load modeling could give distribution network operators and/or planners more precise Conservation Voltage Regulation (CVR) and energy saving values at each operating time stage. Furthermore, more accurate load modeling of each quasi real-time stage could improve VVO efficiency. As the type, amount and operating time of each residential appliance vary throughout the day, this paper aims to discover the ZIP load model of each quasi real-time stage separately through disaggregated data (i.e., decomposing residential load consumption into home appliance consumptions). This paper shows that the energy conservation achieved by CVR operation through the presented quasi real-time ZIP load modeling could lead AMI-based VVO solutions to a higher level of accuracy and data resolution compared with conventional techniques. Therefore, this paper first introduces a new quasi real-time AMI-based VVO engine. Then, it investigates the ZIP load model of each quasi real-time stage through statistical data to conserve energy consumption. To check the authenticity and the applicability of the presented model in a whole system, a 33-node distribution feeder is employed., Published. Received 16 March 2015, Revised 14 May 2015, Accepted 3 June 2015, Available online 2 July 2015.
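For reference, the standard ZIP model expresses load power as a quadratic function of voltage, with constant-impedance (Z), constant-current (I) and constant-power (P) fractions summing to one. The sketch below uses illustrative placeholder coefficients, not the paper's fitted values, to show the CVR effect of a voltage reduction.

```python
# Standard ZIP load model; coefficient values below are illustrative placeholders.

def zip_power(v_pu, p0_kw, zp, ip, pp):
    """P(V) = P0 * (Zp*(V/V0)^2 + Ip*(V/V0) + Pp), with Zp + Ip + Pp = 1
    and v_pu = V/V0 in per unit."""
    assert abs(zp + ip + pp - 1.0) < 1e-6
    return p0_kw * (zp * v_pu ** 2 + ip * v_pu + pp)

if __name__ == "__main__":
    # CVR effect: lowering voltage from 1.00 to 0.95 p.u. for a mixed load
    p_nominal = zip_power(1.00, 100.0, 0.4, 0.3, 0.3)
    p_reduced = zip_power(0.95, 100.0, 0.4, 0.3, 0.3)
    print(f"Demand reduction: {100 * (1 - p_reduced / p_nominal):.1f}%")
```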
Proceedings of the 6th International Conference on Agents and Artificial Intelligence in Angers, France, 2014. In this paper, we explore the use of ranking functions in reasoning about belief change. It is well-known that the semantics of belief revision can be defined either through total pre-orders or through ranking functions over states. While both approaches have similar expressive power with respect to single-shot belief revision, we argue that ranking functions provide distinct advantages at both the theoretical level and the practical level, particularly when actions are introduced. We demonstrate that belief revision induces a natural algebra over ranking functions, which treats belief states and observations in the same manner. When we introduce belief progression due to actions, we show that many natural domains can be easily represented with suitable ranking functions. Our formal framework uses ranking functions to represent belief revision and belief progression in a uniform manner; we demonstrate the power of our approach through formal results, as well as a series of natural problems in commonsense reasoning., Conference paper, Published.
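As a concrete illustration of reasoning with ranking functions, the sketch below implements Spohn-style (A, n)-conditionalization over a handful of worlds: the agent believes exactly the rank-0 worlds, and revising by an observation shifts the ranks so the observation becomes believed with a chosen firmness. This is a generic construction from the belief-change literature and is not claimed to be the paper's exact revision or progression operator.

```python
# Spohn-style ranking functions: beliefs are the rank-0 worlds; revision by
# (phi, strength)-conditionalization. A generic construction, offered only to
# make the abstract concrete.

def beliefs(kappa):
    """The agent believes exactly the worlds of rank 0."""
    return {w for w, r in kappa.items() if r == 0}

def revise(kappa, phi, strength=1):
    """Shift phi-worlds so the most plausible one gets rank 0, and not-phi worlds
    so their best rank is `strength`; phi becomes believed with that firmness."""
    phi_min = min(r for w, r in kappa.items() if w in phi)
    not_min = min((r for w, r in kappa.items() if w not in phi), default=0)
    return {w: (r - phi_min) if w in phi else (r - not_min + strength)
            for w, r in kappa.items()}

if __name__ == "__main__":
    # Worlds encode (bird?, flies?); initially we believe there is a bird that flies.
    kappa = {("bird", "flies"): 0, ("bird", "not-flies"): 1,
             ("no-bird", "flies"): 2, ("no-bird", "not-flies"): 2}
    observed = {("bird", "not-flies"), ("no-bird", "not-flies")}   # "it does not fly"
    print(beliefs(revise(kappa, observed, strength=2)))
```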
Proceedings from CIGRÉ Canada Conference, Montreal, Sept. 2012. In the past decade, smart microgrids have raised the feasibility and affordability of adaptive, real-time Volt/VAr optimization (VVO) and Conservation Voltage Reduction (CVR) implementations through their exclusive features, such as smart metering technologies and various types of dispersed generation. Smart distribution networks are presently capable of achieving higher degrees of efficiency and reliability by employing a new integrated Volt/VAr optimization system. For VVO application, two well-known approaches are recommended by different utilities and/or companies: centralized VVO and decentralized VVO. In centralized VVO, the processing system is placed in a central controller unit such as the DMS in the so-called “Utility Back Office”. The DMS uses relevant measurements taken from termination points (i.e. utility subscribers), supplied to it either from field collectors or directly from the MDMS, to determine the best possible settings for field-bound VVO/CVR assets to achieve the desired optimization and conservation targets. These settings are then off-loaded to such assets through existing downstream pipes, such as a SCADA network. In contrast, decentralized VVO utilizes VVO/CVR engines which are located in the field, in close proximity to the relevant assets, to conserve voltage and energy according to local attributes of the distribution network. In this case, local measurements do not need to travel from the field to the back office, and the new settings for VVO/CVR assets are determined locally rather than by a centralized controller. Without expressing a preference between the above-mentioned VVO techniques, this paper studies an adaptive optimization engine for real-time VVO/CVR in smart microgrids based on Intelligent Agent technology. The optimization algorithm provides the optimal solution to the VVO/CVR problem at each real-time stage by minimizing system loss cost, and improves system energy efficiency as well as the voltage profile of the relevant distribution system. The algorithm may employ distributed generation sources to address the Volt/VAr optimization problem in real time. Coordinated VVO/CVR requires real-time data analysis every 15 minutes. It utilizes a distributed command and control architecture to supply the VVO Engine (VVOE) with the required data, and secures real-time configuration from the VVO engine for the VVO control devices such as On-Load Tap Changers (OLTCs), Voltage Regulators (VRs) and Capacitor Banks (CBs). It also has the option of employing distributed generation (DG) as well as modelling load effects in the VVO/CVR application. The algorithm minimizes the distribution network power loss cost at each time stage and checks the voltage deviation of distribution buses and distributed generation sources, considering different types of constraints such as system power flow, distribution network power factor, system active and reactive power constraints and switching limitations of Volt/VAr control devices. The algorithm receives the required real-time data from an intelligent agent and then solves the real-time VVO/CVR problem in order to find the optimal configuration of the network in real time. The paper uses the British Columbia Institute of Technology (BCIT) distribution network as its case study in order to explore the effectiveness and the accuracy of the optimization engine.
Moreover, the VVO/CVR optimization algorithm is implemented in different configurations: a) VVO/CVR confined to the substation, and b) VVO/CVR within the substation and along distribution feeders. The algorithm also checks the availability of DGs to assist VVO/CVR control functions and assesses the impact of new distributed sources, such as a Flywheel Energy Storage System (FESS), on real-time VVO/CVR. For this purpose, the algorithm classifies DGs in a microgrid based on their impacts and instantiates them based on their application feasibility for real-time VVO/CVR., Conference paper, Published.
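For context, a generic per-stage VVO/CVR problem of the kind described in this abstract can be written as the formulation below, where u_t collects the discrete control settings (OLTC/VR taps, capacitor states) at time stage t. The notation, cost terms, and constraint list are illustrative assumptions, not the paper's exact model.

```latex
% Generic per-stage VVO/CVR formulation (illustrative; symbols are assumptions).
\begin{align}
\min_{u_t}\quad & C_{\mathrm{loss}} \sum_{b \in \mathcal{B}} P^{\mathrm{loss}}_{b,t}(u_t)
      \;+\; \sum_{k \in \mathcal{D}} C^{\mathrm{sw}}_{k}\,\lvert u_{k,t} - u_{k,t-1} \rvert \\
\text{subject to}\quad
 & \text{AC power-flow equations at every bus}, \nonumber\\
 & V^{\min} \le V_{j,t} \le V^{\max} \quad \forall j, \nonumber\\
 & \mathrm{pf}_{t} \ge \mathrm{pf}^{\min}, \nonumber\\
 & u_{k,t} \in \{\text{allowed OLTC/VR taps, CB states}\} \quad \forall k \in \mathcal{D}, \nonumber\\
 & \textstyle\sum_{t} \lvert u_{k,t} - u_{k,t-1} \rvert \le N^{\max}_{k} \quad \text{(daily switching limit)}. \nonumber
\end{align}
```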