This page is the temporary home for the CSDL Tech Report library index. See csdl-trs.bib for a bibtex file containing citations to all of these papers.

2020

Philip M. Johnson, Carleton Moore, Peter Leong, and Seungoh Paek. RadGrad: Removing the 'extra' from extracurricular to improve student engagement, retention, and diversity. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE 2020), March 2020. [ .pdf ]

RadGrad is a curriculum initiative implemented via a web-based application that combines features of social networks, degree planners, and serious games. RadGrad redefines the traditional meaning of “progress” and “success” in the undergraduate computer science degree program, with the ultimate goal of improving student engagement, diversity, and retention. In this paper, we relate RadGrad to other curriculum initiatives, overview its key functionality, present results from an evaluation conducted during its first year of deployment, and discuss our lessons learned and future directions.

Anthony J. Christe. Laha: A framework for adaptive optimization of distributed sensor networks. PhD thesis, University of Hawaii, Department of Information and Computer Sciences, May 2020. [ .pdf ]

Distributed Sensor Networks (DSNs) face a myriad of technical challenges. This dissertation examines two important DSN challenges. One problem is converting “primitive” sensor data into actionable products and insights. For example, a DSN for power quality (PQ) might gather primitive data in the form of raw voltage waveforms and produce actionable insights in the form of the ability to predict when PQ events are going to occur by observing cyclical data. For another example, a DSN for infrasound might gather primitive data in the form of microphone counts and produce actionable insight in the form of determining what, when, and where the signal came from. To make progress towards this problem, DSNs typically implement one or more of the following strategies: detecting signals in the primitive data (deciding if something is there), classification of signals from primitive data (deciding what is there), and localization of signals (when and from where did the signals come). Further, DSNs make progress towards this problem by forming relationships between primitive data, finding correlations across spatial and temporal attributes, and associating metadata with primitive data to provide contextual information not collected by the DSN. These strategies can be employed recursively. As an example, the result of aggregating typed primitive data provides a new, higher level of typed data which contains more context than the data from which it was derived. This new typed data can itself be aggregated into new, higher level types and also participate in relationships. A second important challenge is managing data volume. Most DSNs produce large amounts of (increasingly multimodal) primitive data, of which only a tiny fraction (the signals) is actually interesting and useful. The DSN can utilize one of two strategies: keep all of the information and primitive data forever, or employ some sort of strategy for systematically discarding (hopefully uninteresting and not useful) data. As sensor networks scale in size, the first strategy becomes infeasible. Therefore, DSNs must find and implement a strategy for managing large amounts of sensor data. The difficult part is finding an effective and efficient strategy for deciding which data is interesting and must be kept and which data to discard. This dissertation investigates the design, implementation, and evaluation of the Laha framework, which provides new insight into both of these problems. First, the Laha framework provides a multi-leveled representation for structuring and processing DSN data. The structure and processing at each level is designed with the explicit goal of turning low-level data into actionable insights. Second, each level in the framework implements a “time-to-live” (TTL) strategy for data within the level. This strategy states that data must either “progress” upwards through the levels towards more abstract, useful representations within a fixed time window, or be discarded and lost forever. The TTL strategy is useful because, when implemented, it allows DSN designers to calculate upper bounds on data storage at each level of the framework and supports graceful degradation of DSN performance. There are several smaller, but still important, problems that exist within the context of these two larger problems.
Examples of the smaller problems that Laha aims to overcome on the way to the larger goals include optimization of triggering, detection, and classification, building a model of sensing field topology, optimizing sensor energy use, optimizing bandwidth, and providing predictive analytics for DSNs. Laha provides four contributions to the area of DSNs. First, the Laha design, a novel abstract distributed sensor network framework that provides useful properties relating to data management. Second, an evaluation of the Laha abstract framework through the deployment of two Laha-compliant reference implementations, validated data collection, and several experiments used to confirm or refute the benefits claimed for Laha. Third, two Laha-compliant reference implementations, OPQ and Lokahi, which can be used to form DSNs for the collection of distributed power quality signals and the distributed collection of infrasound signals. Fourth, a set of implications for modern distributed sensor networks resulting from the evaluation of Laha. The major claim of this dissertation is that the Laha framework provides a generally useful representation for real-time, high-volume DSNs that addresses several major issues modern DSNs face.
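The TTL strategy described above lends itself to a compact illustration. The following sketch is an assumption-level example (the level names, TTL values, and promotion test are invented for illustration and are not taken from the Laha implementation): each level expires data whose time-to-live has elapsed, and an aggregation step promotes interesting items to the next, more abstract level.

```python
import time
from collections import deque

# Hypothetical sketch of a multi-level store with per-level time-to-live (TTL),
# illustrating the idea that data must "progress" upward or be discarded.
# Level names and TTL values are illustrative assumptions only.

class Level:
    def __init__(self, name, ttl_seconds):
        self.name = name
        self.ttl = ttl_seconds
        self.items = deque()  # (arrival_time, payload)

    def insert(self, payload, now=None):
        self.items.append((now or time.time(), payload))

    def expire(self, now=None):
        """Drop items whose TTL elapsed without promotion."""
        now = now or time.time()
        while self.items and now - self.items[0][0] > self.ttl:
            self.items.popleft()

    def promote(self, next_level, is_interesting, now=None):
        """Move items judged interesting to the next (more abstract) level."""
        kept = deque()
        for arrival, payload in self.items:
            if is_interesting(payload):
                next_level.insert(payload, now)
            else:
                kept.append((arrival, payload))
        self.items = kept

# Example: raw samples live 60 seconds, detected events live one hour.
samples = Level("samples", ttl_seconds=60)
events = Level("events", ttl_seconds=3600)
samples.insert({"rms_voltage": 108.0})
samples.promote(events, is_interesting=lambda d: d["rms_voltage"] < 110.0)
samples.expire()
```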

Sergey Negrashov. Design, Implementation, and Evaluation of Napali: A novel distributed sensor network for improved power quality monitoring. PhD thesis, University of Hawaii, Department of Information and Computer Sciences, May 2020. [ .pdf ]

Today's big data world relies on precise, timely, and actionable intelligence, while being burdened by the ever increasing need for data cleaning and preprocessing. While in the case of ingesting large quantities of unstructured data this problem is unavoidable, when it comes to sensor networks built for a specific purpose, such as anomaly detection, some of that computation can be moved to the edge of the network. This thesis concerns the special case of sensor networks tailored for monitoring the power grid for anomalous behavior. These networks monitor power delivery infrastructure with the intent of finding deviations from the nominal steady state across multiple geographical locations. These deviations, known as power quality anomalies, may originate at, and be localized to, the location of the sensor, or may affect a sizable portion of the power grid. The difficulty of evaluating the extent of a power quality anomaly stems directly from its short temporal extent and variable geographical impact. I present a novel distributed power quality monitoring system called Napali, which relies on metrics extracted from individual meters and their temporal locality in order to intelligently detect anomalies and extract raw data within temporal windows and geographical areas of interest. The claims of this thesis are that Napali outperforms existing grid-wide power quality event detection methods in resource utilization and sensitivity, and that Napali's residential monitoring is capable of power grid monitoring without deployment on high-voltage transmission lines. The final claim of this thesis is that Napali's ability to extract portions of events that did not exceed the critical thresholds used in other detection methods allows for better localization of power quality disturbances. These claims were validated through a deployment at the University of Hawaii. Fifteen OPQ Box devices, designed specifically to operate with Napali, were installed at various locations on campus. Data collected from these monitors was compared with smart meters already deployed across the University. Additionally, Napali was compared with standard methods of power quality event detection running alongside the Napali system. The Napali methodology outperformed the standard methods of power quality monitoring in resource consumption, event quality, and sensitivity. Additionally, I was able to validate that residential utility monitoring is capable of event detection and localization without monitoring higher levels of the power grid hierarchy. Finally, as a demonstration of Napali's capabilities, I showed how data collected by my framework can be used to partition the power delivery infrastructure without prior knowledge of the power grid topology.
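To make the idea of combining per-meter metrics with temporal locality concrete, here is a hedged sketch (invented names and a fixed time window; not Napali's actual detection algorithm) that groups metric threshold crossings from different meters into candidate local or grid-wide events.

```python
# Hypothetical sketch: group threshold crossings reported by individual meters
# into candidate events based on temporal locality. Illustrative only; this is
# not Napali's actual detection algorithm.

def group_by_temporal_locality(crossings, window_s=5.0):
    """crossings: list of (timestamp_s, meter_id); returns list of event groups."""
    events, current = [], []
    for ts, meter in sorted(crossings):
        if current and ts - current[-1][0] > window_s:
            events.append(current)
            current = []
        current.append((ts, meter))
    if current:
        events.append(current)
    return events

crossings = [(100.0, "box-3"), (101.2, "box-7"), (250.5, "box-3")]
for event in group_by_temporal_locality(crossings):
    meters = {m for _, m in event}
    scope = "grid-wide" if len(meters) > 1 else "local"
    print(scope, event)
```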

2019

Charles Dickens, Anthony J. Christe, and Philip M. Johnson. A transient classification system implementation on an open source distributed power quality network. In Proceedings of the Ninth International Conference on Smart Grids, Green Communications and IT Energy-aware Technologies, Athens, Greece, June 2019. [ .pdf ]

Capturing and classifying power quality phenomena is important for the smooth functioning of electrical grids. This paper presents methods for classifying the four types of transients (impulsive, arcing, oscillatory, and periodic notching) specified in the IEEE 1159 Power Quality standard. Our methods implement a tractable algorithm, which applies well-understood signal processing methods and statistical inference for feature extraction and decision making. We tested our methods on simulated power quality disturbances in order to demonstrate the capabilities of the system. The results of this research include an operational implementation of a transient classifier for Open Power Quality, an open source distributed power quality network. Additional functionality can be easily incorporated into the system to extend the utility of our methods, such as a meta-analysis to capture higher-level, network-wide events.
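As a toy illustration of the kind of signal processing step involved in transient detection (not the classifier described in the paper; the sample rate, fundamental, and threshold are assumptions), one can fit and subtract the 60 Hz fundamental and then threshold the residual:

```python
import numpy as np

# Toy sketch: flag an impulsive transient by removing an estimated 60 Hz
# fundamental and thresholding the residual. Illustrates one possible signal
# processing step only; it is not the paper's classifier.

fs, f0 = 12000, 60.0                      # sample rate and fundamental (assumed)
rng = np.random.default_rng(1)
t = np.arange(0, 0.1, 1.0 / fs)
signal = 120.0 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.shape)
signal[600] += 80.0                       # inject an impulsive spike

# Least-squares fit of the fundamental (sine and cosine terms).
basis = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
residual = signal - basis @ coeffs

threshold = 5.0 * np.std(residual[:500])  # baseline window before the spike
transient_samples = np.nonzero(np.abs(residual) > threshold)[0]
print("possible transient at samples:", transient_samples)
```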

Philip M. Johnson. Design and evaluation of an athletic approach to software engineering education. ACM Transactions on Computing Education, August 2019. [ .pdf ]

Modern web application development provides an attractive application area for introductory software engineering education, as students have direct experience with the domain and it provides them with the potential to gain practical, real-world skills. Achieving this potential requires the development of competency with a multiple component tech stack for web application development, which is challenging to acquire within a single semester. In this research, we designed, implemented, and evaluated a new pedagogy called “athletic software engineering” which is intended to help students efficiently and effectively acquire competency with a multiple component tech stack as a precursor to a web application development project. We evaluated the pedagogy over four years and six semesters with 286 students and found strong evidence for its effectiveness.

2018

Philip M. Johnson, Carleton A. Moore, Peter Leong, and Seungoh Paek. DEP/RadGrad: Enhancing individualized learning plans and communities of practice to improve engagement, retention, and diversity in undergraduate computer science education. Technical Report CSDL-18-01, University of Hawaii, Honolulu, HI, January 2018. [ .pdf ]

The fundamental idea in this proposal is to provide students, faculty, and advisors with an alternative perspective on the undergraduate degree program—which traditionally boils down to a single kind of activity (coursework) and a single metric for success (grade point average). Our alternative perspective is called the Degree Experience, and it gives first-class status to both curricular activities (courses) and extracurricular activities (discipline-oriented events, activities, clubs, etc.) To establish the first-class status of extracurricular activities, the Degree Experience perspective replaces GPA as the single metric for success with a three-component metric called ICE that assesses student development with respect to Innovation, Competency, and Experience. Each student’s Degree Experience also includes a representation of their disciplinary interests and career goals that helps them assess the relevance of potential curricular and extracurricular activities. Over the past two years, we have developed this idea into a conceptual framework called Degree Experience Plans (DEP) and a supporting technology platform called RadGrad. The design of DEP/RadGrad is influenced by research on diversity and retention and two educational research theories: Individualized Learning Plans (ILP) and Communities of Practice (CoP). ILPs help students connect their current studies to their future career goals. CoP identifies the importance of practitioner networks for both formal and informal learning. Based upon this prior research, and our pilot use of DEP/RadGrad with a small set of undergraduate students, we hypothesize that student populations adopting the Degree Experience perspective will show increased levels of engagement, retention, and diversity.

Anthony J. Christe. Laha: A framework for adaptive optimization of distributed sensor networks. Technical Report CSDL-18-02, University of Hawaii, Honolulu, HI, November 2018. [ .pdf ]

Distributed Sensor Networks (DSNs) are faced with a myriad of technical challenges. This dissertation examines two important DSN challenges. One problem that is apparent in any DSN is converting “primitive” sensor data into actionable products and insights. For example, a DSN for power quality (PQ) might gather primitive data in the form of raw voltage waveforms and produce actionable insights in the form of classified power quality events such as voltage sags or frequency swells, or provide the ability to predict when PQ events are going to occur by observing cyclical data. For another example, a DSN for infrasound might gather primitive data in the form of microphone counts and produce actionable insight in the form of determining what, when, and where the signal came from. To make progress towards this problem, DSNs typically implement one or more of the following strategies: detecting signals in the primitive data (deciding if something is there), classification of signals from primitive data (deciding what is there), localization of signals (when and where did the signals come from), and forming relationships between primitive data by finding correlations across spatial and temporal attributes and by associating metadata with primitive data to provide contextual information not collected by the DSN. These strategies can be employed recursively. As an example, the result of aggregating typed primitive data provides a new, higher level of typed data which contains more context than the data from which it was derived. This new typed data can itself be aggregated into new, higher level types and also participate in relationships. A second important challenge is managing data volume. Most DSNs produce large amounts of (increasingly multimodal) primitive data, of which only a tiny fraction (the signals) is actually interesting and useful. The DSN can utilize one of two strategies: keep all of the information and primitive data forever, or employ some sort of strategy for systematically discarding (hopefully uninteresting and not useful) data. As sensor networks scale in size, the first strategy becomes infeasible. Therefore, DSNs must find and implement a strategy for managing large amounts of sensor data. The difficult part is finding an effective and efficient strategy for deciding which data is interesting and must be kept and which data to discard. This dissertation investigates the design, implementation, and evaluation of the Laha framework, which is intended to address both of these problems. First, the Laha framework provides a multi-leveled representation for structuring and processing DSN data. The structure and processing at each level is designed with the explicit goal of turning low-level data into actionable insights. Second, each level in the framework implements a “time-to-live” (TTL) strategy for data within the level. This strategy states that data must either “progress” upwards through the levels towards more abstract, useful representations within a fixed time window, or be discarded and lost forever. The TTL strategy is interesting because, when implemented, it allows DSN designers to calculate upper bounds on data storage at each level of the framework and supports graceful degradation of DSN performance.

Serge Negrashov. Design, implementation, and evaluation of Napali: A novel distributed sensor network for improved power quality monitoring. Technical Report CSDL-18-03, University of Hawaii, Honolulu, HI, November 2018. [ .pdf ]

Today’s big data world is heavily relied on to bring precise, timely, and actionable intelligence, while being burdened by the ever increasing need for data cleaning and preprocessing. While in the case of ingesting large quantity of unstructured data this problem is unavoidable, when it comes to sensor networks built for a specific purpose, such as anomaly detection, some of that computation can be moved to the edge of the network. This thesis concerns the special case of sensor networks tailored for monitoring the power grid for anomalous behavior. These networks consist of meters connected to the grid across multiple geographically separated locations, while monitoring the power delivery infrastructure with the intent of finding deviations from the nominal steady state. These deviations, known as power quality anomalies, may originate, and be localized to the location of the sensor, or may affect a sizable portion of the power grid. The difficulty of evaluating the extent of a power quality anomaly stems directly from their short temporal and variable geographical impact. I propose a novel distributed power quality monitoring system called Napali which relies on extracted metrics from individual meters and their temporal locality in order to intelligently detect anomalies and extract raw data within temporal window and geographical areas of interest. The results of this research should be useful in other disciplines, such as general sensor network applications, IOT, and intrusion detection systems.

2017

Anthony J. Christe. Data management for distributed sensor networks: A literature review. Technical Report CSDL-17-01, University of Hawaii, Honolulu, HI, March 2017. [ .pdf ]

Sensor networks are spatially distributed autonomous sensors that monitor the physical world around them and often communicate those readings over a network to a server or servers. Sensor networks can benefit from the generally “unlimited resources” of the cloud, namely processing, storage, and network resources. This literature review surveys the major components of distributed data management, namely, cloud computing, distributed persistence models, and distributed analytics.

Sergey Negrashov. Compression and compressed sensing in bandwidth constrained sensor networks. Technical Report CSDL-17-02, University of Hawaii, Honolulu, HI, March 2017. [ .pdf ]

Improvements in sensor and radio technologies allow for the creation of cheap sensors interconnected via radio links and the Internet. These advancements have opened the door for the creation of large-area autonomous monitoring networks referred to as sensor networks. The bandwidth requirements of a wireless sensor network have a direct effect on its performance. This paper describes three bandwidth reduction methods: lossless compression, lossy compression, and compressed sensing.
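As a small, assumption-level illustration of the lossless end of this spectrum (not the paper's evaluation setup), delta-encoding slowly varying sensor samples before a general-purpose compressor typically improves the compression ratio:

```python
import zlib
import numpy as np

# Sketch: delta-encode slowly varying 16-bit sensor samples, then apply zlib.
# Illustrates lossless bandwidth reduction in general; not the paper's setup.

rng = np.random.default_rng(0)
samples = (1000 + np.cumsum(rng.integers(-2, 3, size=10_000))).astype(np.int16)

raw = samples.tobytes()
deltas = np.diff(samples, prepend=samples[:1]).astype(np.int16).tobytes()

print("raw -> zlib:   ", len(zlib.compress(raw)), "bytes")
print("delta -> zlib: ", len(zlib.compress(deltas)), "bytes")
```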

Anthony J. Christe, Sergey Negrashov, Philip M. Johnson, Dylan Nakahodo, David Badke, and David Aghalarpour. OPQ version 2: An architecture for distributed, real-time, high performance power data acquisition, analysis, and visualization. In Proceedings of the Seventh Annual IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems, Honolulu, HI, USA, July 2017. [ .pdf ]

OpenPowerQuality (OPQ) is a framework that supports end-to-end capture, analysis, and visualization of distributed real-time power quality (PQ) data. Version 2 of OPQ builds on version 1 by providing higher sampling rates, optional battery backup, end-to-end security, GPS synchronization, pluggable analysis, and a real-time visualization framework. OPQ provides real-time distributed power measurements, which allows users to see both local PQ events and grid-wide PQ events. The OPQ project has three principal components: back-end hardware for making power measurements, middleware for data acquisition and analysis, and a front-end providing visualizations. OPQBox2 is a hardware platform that takes PQ measurements, provides onboard analysis, and securely transfers data to our middleware. The OPQ middleware performs filtering on the OPQBox2 sensor data and performs high-level PQ analysis. The results of our PQ analysis and the real-time state of the sensor network are displayed using OPQView. We’ve collected distributed PQ data from locations across Oahu, Hawaii, and have demonstrated our ability to detect both local and grid-wide power quality events.

Pavel Senin, Jessica Lin, Xing Wang, Tim Oates, Sunil Gandhi, Arnold Boedihardjo, Crystal Chen, and Susan Frankenstein. Grammarviz 3.0: Interactive discovery of variable-length time series patterns. ACM Transactions on Knowledge Discovery from Data, March 2017. [ .pdf ]

The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their length known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure, but of different lengths, may co-exist in a time series. Addressing these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference – two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work we present GrammarViz 3.0 – a software package that provides implementations of the proposed algorithms and a GUI for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids time series recurrent and anomalous pattern discovery.
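For readers unfamiliar with symbolic discretization, the following minimal sketch shows PAA followed by SAX-style symbol assignment (a simplified illustration with an assumed 4-letter alphabet; it is not the GrammarViz implementation):

```python
import numpy as np

# Minimal sketch of PAA + SAX-style discretization (simplified; not the
# GrammarViz implementation). A z-normalized series is reduced with Piecewise
# Aggregate Approximation, and each segment mean is mapped to a letter using
# breakpoints that split the standard normal distribution into equal areas.

def paa(series, segments):
    chunks = np.array_split(np.asarray(series, dtype=float), segments)
    return np.array([c.mean() for c in chunks])

def sax(series, segments=8, alphabet="abcd"):
    z = (series - np.mean(series)) / (np.std(series) + 1e-12)
    breakpoints = np.array([-0.674, 0.0, 0.674])  # 4-letter alphabet under N(0, 1)
    symbols = np.searchsorted(breakpoints, paa(z, segments))
    return "".join(alphabet[s] for s in symbols)

t = np.linspace(0, 4 * np.pi, 400)
print(sax(np.sin(t)))   # prints an 8-letter SAX word for two sine periods
```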

Amy M. Takayesu. RadGrad: Using degree planning, social networking, and gamification to improve academic, professional, and social engagement during the undergraduate computer science degree experience. Technical Report CSDL-17-05, University of Hawaii, Honolulu, HI, July 2017. [ .pdf ]

A casual analysis of the Hawaii technology community site, TechHui, suggests that over the past decade, recent alumni and current undergraduates of the Information and Computer Science (ICS) program at the University of Hawaii at Manoa (UHM) have experienced several problems with various academic, professional, and social aspects of their ICS experience. Existing degree planning systems such as STAR, Starfish by Hobsons, Blackboard Planner and Coursicle fail to provide the specific support that ICS students need to create a complete and comprehensive degree plan. Existing academic social networks such as LinkedIn, TechHui and Rate My Professors fail to connect students closely with professors and alumni. Current popular video games suggest that several gamification features could encourage ICS students to achieve higher goals. A new system called RadGrad combines degree planning, social networking, and gamification in a novel way that aims to give ICS undergraduates the support they need to succeed and redefines what it means to have a successful degree experience. The overall goal of this thesis is to justify the initial RadGrad system design and establish baseline values for future studies. A baseline student survey conducted in Spring 2017 reveals current and more detailed student perceptions on the academic, professional, and social aspects of the ICS degree experience prior to using RadGrad. These baseline results can be used in a future study to measure if RadGrad has had any effects on the students.

2016

Emily Hill, Philip M. Johnson, and Daniel Port. Is an athletic approach the future of software engineering education? IEEE Software, January 2016. [ .pdf ]

In the past 10 years, there has been considerable evidence of the harmful effects of multitasking and other distractions on learning. One study found that multitasking students spend only 65 percent of their time actively learning, take longer to complete assignments, make more mistakes, are less able to remember material later, and show less ability to generalize the information they learned for use in other contexts. Traditional software engineering education approaches (in-class lectures, unsupervised homework assignments, and occasional projects) create many opportunities for distraction. To address this problem, coauthor Philip M. Johnson developed an “athletic” software engineering education approach, which coauthors Emily Hill and Daniel Port adapted for use in their courses. We wanted to determine if software engineering education could be redesigned to be like an athletic endeavor and whether this would improve learning.

Philip M. Johnson, Daniel Port, and Emily Hill. An athletic approach to software engineering education. In Proceedings of the 29th IEEE conference on software engineering education and training, Dallas, Texas, USA, April 2016. [ .pdf ]

We present our findings after two years of experience involving three instructors using an “athletic” approach to software engineering education (AthSE). Co-author Johnson developed AthSE in 2013 to address issues he experienced teaching graduate and undergraduate software engineering. Co-authors Port and Hill subsequently adapted the original approach to their own software courses. AthSE is a pedagogy in which the course is organized into a series of skills to be mastered. For each skill, students are given practice "Workouts" along with videos showing the instructor performing the Workout both correctly and quickly. Unlike traditional homework assignments, students are advised to repeat the Workout not only until they can complete it correctly, but also as quickly as the instructor. In this experience report we investigate the following question: how can software engineering education be redesigned as an athletic endeavor, and will this provide more efficient and effective learning among students and more rapidly lead them to greater competency and confidence?

Anthony J. Christe, Sergey Negrashov, and Philip M. Johnson. OpenPowerQuality: An open source framework for power quality collection, analysis, visualization, and privacy. In Proceedings of the Seventh Conference on Innovative Smart Grid Technologies (ISGT2016), Minneapolis, MN, USA, September 2016. [ .pdf ]

As power grids transition from a centralized distribution model to a distributed model, maintaining grid stability requires real-time power quality (PQ) monitoring and visualization. As part of the Open Power Quality (OPQ) project, we designed and deployed a set of open source power quality monitors and an open source cloud-based aggregation and visualization system built with the utility customer in mind. Our aim is to leverage a flexible privacy model combined with inexpensive and easy-to-use PQ meters in order to deploy a high density power quality monitoring network across the Hawaiian islands. In this paper we describe OPQHub, a privacy-focused open source PQ visualization system, along with results of a small scale deployment of our prototype PQ meter across the island of Oahu. Our results demonstrate that OPQ can provide useful power quality data at an order of magnitude less cost than prior approaches.

Yongwen Xu, Philip M. Johnson, George E. Lee, Carleton A. Moore, and Robert S. Brewer. Sustainability, Green IT and Education Strategies in the 21st Century, chapter Design and evaluation of the Makahiki open source serious game framework for sustainability education. Springer, 2016. [ .pdf ]

Sustainability education and conservation have become an international imperative due to the rising cost of energy, increasing scarcity of natural resources, and irresponsible environmental practices. This paper presents Makahiki, an open source serious game framework for sustainability, which implements an extensible framework for different organizations to develop sustainability games. It provides a variety of built-in games and content focused on sustainability; game mechanics such as leaderboards, points, and badges; a variety of common services such as authentication, real-time game analytics, and the ability to deploy to the cloud; as well as a responsive user interface for both computer and mobile devices. The successful implementation of six sustainability educational games in different organizations provides evidence regarding the ability to customize the Makahiki framework successfully to different environments in both organizational and infrastructure aspects. The Serious Game Stakeholder Experience Assessment Method (SGSEAM) was used to formally evaluate Makahiki in order to understand its strengths and weaknesses as a serious game framework for sustainability.

Gregory L. Burgess. MANDe: Procedural optimization and measurement of passive acoustic sensor networks for animal observation in marine environments. Technical Report CSDL-16-04, University of Hawaii, Honolulu, HI, August 2016. [ .pdf ]

Static Observation Networks (SONs) are often used in the biological sciences to study animal migration and habitat. These networks are composed of self-contained, stationary receivers that continuously listen for acoustic transmissions released by sonic tags carried by individual animals. The transmissions released by these tags carry serial identification numbers that can be used to verify that a particular individual was near a given receiver. Because receivers in these networks are stationary, receiver placement is critical to maximizing data recovery. Currently, no open-source automated mechanism exists to facilitate the design of optimal receiver networks. SON design is often governed by loose "rules of thumb" and "by eye" readings of low resolution bathymetric maps. Moreover, there is no standardized method for evaluating the efficacy of a SON. This paper introduces the Maximal Acoustic Network Designer (MANDe), a system which takes advantage of high-resolution bathymetric data and advanced animal modeling to provide optimal network designs. MANDe also allows for statistical analysis of existing network configurations in order to create efficacy metrics that can be used to evaluate arbitrary network configurations. This paper presents MANDe's mathematical and conceptual models and analyzes the computational complexities of its methods.

2015

Yongwen Xu. Makahiki and SGSEAM: Design and Evaluation of A Serious Game Framework for Sustainability and Stakeholder Experience Assessment Method. PhD thesis, University of Hawaii, Department of Information and Computer Sciences, August 2015. [ .pdf ]

Sustainability education and conservation have become an international imperative due to the rising cost of energy, increasing scarcity of natural resources, and irresponsible environmental practices. Over the past decade, running energy and water challenges has been a focal point for sustainability efforts at both university and industry campuses. Designers of such challenges typically have three choices for information technology: (a) build their own custom in-house solution; (b) out-source to a commercial provider; or (c) use a minimal tech solution such as a web page and manual posting of data and results. None of these choices are ideal: the custom in-house solution requires sophisticated design and implementation skills; out-sourcing can be financially expensive and impedes evolution; and the minimal tech solution does not fully leverage the possibilities of advanced information technology. To provide a better alternative to these three choices, I have led an effort over the past years to design and implement an open source serious game framework for sustainability called Makahiki. Makahiki implements an extensible framework with a variety of common services for developing sustainability games, including authentication; game mechanics such as leaderboards, points, and badges; a variety of built-in games and content focused on sustainability; a responsive user interface; cloud-based deployment; and the ability to customize to the needs of individual organizations. Makahiki lowers the overhead for those who would build a custom in-house solution by providing pre-built components. It can lower the financial cost for those who would out-source by providing an open source alternative. Finally, it provides an opportunity for those who would choose a minimal tech solution to instead provide more sophisticated information technology. To provide initial evidence regarding the ability of the Makahiki framework to support sustainability games in different environments, we ran seven challenges at four organizations: the University of Hawaii at Manoa, Hawaii Pacific University, the East-West Center, and Holy Nativity School. While these experiences provided anecdotal evidence for the usefulness of Makahiki, we realized that a more rigorous evaluation of the framework would yield better quality insight into its current quality and requirements for future enhancement. Upon review of the literature, we found little research or experience with formal serious game framework assessment. To address this, I have embarked on research to design an assessment mechanism for serious game frameworks, called the Serious Game Stakeholder Experience Assessment Method (SGSEAM). SGSEAM is designed to provide detailed insight into the strengths and weaknesses of a serious game framework through a stakeholder perspective based approach. In my research, I applied SGSEAM to Makahiki in order to gain better insight into its strengths and weaknesses as a serious game framework. The contributions of my research thus include: the Makahiki serious game framework for sustainability; the SGSEAM assessment method; the insights into creating and running a variety of real-world serious games for sustainability in different organizations; the insights into managing cloud based serious games; and the insights into serious game framework design and assessment generated through application of SGSEAM to Makahiki.
I hope this research will be of interest to researchers and practitioners across several disciplines: software engineering, game design, and sustainability research.

2014

Robert S. Brewer. Three shifts for sustainable HCI: Scalable, sticky, and multidisciplinary. In Proceedings of the CHI 2014 Workshop “What have we learned? A SIGCHI HCI & Sustainability community workshop”, Toronto, Canada, April 2014. [ .pdf ]

While there has been a steady increase in sustainable HCI research, there remains a lack of consensus on how to ensure this research moves us towards achieving sustainability. This paper suggests three ways the sustainable HCI community might shift to better address the challenge of achieving global sustainability. First, we should shift from creating only small-scale solutions to systems and solutions that are scalable to many users and environments, because the problem of sustainability is vast in scale. Second, we should shift from short-term solutions to “sticky” solutions that will continue to have an impact over decades, because sustainability is a problem that will span generations. Third, the sustainable HCI community must shift from an insular focus on our community to a broad engagement and collaboration with other research communities involved in sustainability research.

Philip M. Johnson. Enabling active participation in the Smart Grid through crowdsourced power quality data. Technical Report CSDL-14-01, University of Hawaii, Honolulu, HI, April 2014. [ .pdf ]

This technical report presents a research project designed to gain insight into the following questions: Can crowdsourced power quality data enable active participation in the Smart Grid? What are the technical, social, behavioral, and economic requirements for crowdsourced data that make it effective for detection, monitoring, prediction and diagnosis of selected Smart Grid power quality issues? And finally, how can these project outcomes improve “citizen science” in general and the kinds of intrinsic and extrinsic motivators needed for successful outcomes?

Keone Hiraide. Kukini: The challenges in the design, implementation, and evaluation of a digital records transfer tool for the Hawaii State Digital Archives. Master's thesis, University of Hawaii, April 2014. [ .pdf ]

The Hawaii State Archives needs to update its digital records preservation capabilities. Thus, it is currently in the process of implementing a records system designed to store, protect, and preserve digital records. The types of digital records include medical records, annual reports, birth records, etc. This records system requires a Digital Records Transfer tool, which must provide government agencies of Hawaii with the ability to transfer digital records to the Hawaii State Archives. Its transfer process must use secure and authenticated methods that document and ensure that the entirety of the files has been transferred uncorrupted. Kukini is a digital records transfer tool that has been designed, implemented, tested, and evaluated for use within an archival framework. This paper discusses the design, implementation, and evaluation of Kukini.

Jordan Takayama. Simplifying sustainability game design: A usability evaluation of the Makahiki virtual machine installation and the Smart Grid Game Designer. Master's thesis, University of Hawaii, April 2014. [ .pdf ]

The usability of an application is a measure of how effectively it can be used to perform the tasks it was designed for in its target environment. A user interface – the toolbars, menus, and other elements that control an application – determines how quickly and correctly users can complete tasks. Makahiki is an application framework for designing serious games (games which teach a serious subject) focused on energy conservation, recycling, and clean energy issues. Two features were added to Makahiki in response to user feedback: support for a cross-platform installation method for virtual machines, and a simplified drag-and-drop graphical user interface called the Smart Grid Game Designer (SGG). Usability testing data and feedback on these new features was compared to data and feedback from the previous iteration of Makahiki to determine the effect of these features on the user experience. It was found that the virtual machine installation produced significant improvements in user experience and configuration time. However, users who tested the Smart Grid Game Designer reported issues in understanding Makahiki's "predicate system" of relationships between game tasks that were similar to issues reported by users of the previous iteration of Makahiki.

Anthony J. Christe. OPQ Cloud: A scalable software framework for the aggregation of distributed power quality data. Technical Report CSDL-14-04, University of Hawaii, Honolulu, HI, April 2014. [ .pdf ]

Power quality issues arise in a variety of situations. Voltage fluctuations, frequency fluctuations, and harmonics are all power quality issues that can be caused by weather, high penetration of renewables, man-made issues, or other natural phenomena. We designed a software framework that can aggregate crowdsourced, distributed power quality measurements in order to study power quality issues over a dense geographic area.

Pavel Senin, Jessica Lin, Xing Wang, Tim Oates, Sunil Gandhi, Arnold P. Boedihardjo, Crystal Chen, Susan Frankenstein, and Manfred Lerner. Grammarviz 2.0: A tool for grammar-based pattern discovery in time series. In Proceedings of ECML PKDD 2014, Nancy, France, September 2014. [ .pdf ]

The problem of frequent and anomalous pattern discovery in time series has received a lot of attention in the past decade. Addressing the common limitation of existing techniques, which require a pattern length to be known in advance, we recently proposed grammar-based algorithms for efficient discovery of variable-length frequent and rare patterns. In this paper we present GrammarViz 2.0, an interactive tool that, based on our previous work, implements algorithms for grammar-driven mining and visualization of variable-length time series patterns.

Christina Sablan, Leilani Pena, and Philip M. Johnson. The Kukui Cup at UH Manoa: Lessons learned in 2014 and prospects for new partnerships in campus sustainability. Technical Report CSDL-14-07, University of Hawaii, Honolulu, HI, May 2014. [ .pdf ]

This report identifies lessons learned from the Kukui Cup, and opportunities to integrate the Kukui Cup with a comprehensive, institutionalized program of sustainability at the University of Hawaii (UH). This report focuses on findings from the Spring 2014 challenge, based on qualitative interviews with CSDL staff, directors from the UH Residential Life Office, and the Sustainability Coordinator; online surveys distributed to Residential Assistants by the Residential Directors; and participant observation.

Yongwen Xu, Philip M. Johnson, George E. Lee, Carleton A. Moore, and Robert S. Brewer. Makahiki: An open source serious game framework for sustainability education and conservation. In Proceedings of the 2014 International Conference on Sustainability, Technology, and Education, Taipei City, Taiwan, December 2014. [ .pdf ]

Sustainability education and conservation have become an international imperative due to the rising cost of energy, increasing scarcity of natural resources, and irresponsible environmental practices. This paper presents Makahiki, an open source serious game framework for sustainability, which implements an extensible framework for different organizations to develop sustainability games. It provides a variety of built-in games and content focused on sustainability; game mechanics such as leaderboards, points, and badges; a variety of common services such as authentication, real-time game analytics, and the ability to deploy to the cloud; as well as a responsive user interface for both computer and mobile devices. The successful implementation of six sustainability educational games in different organizations provides evidence regarding the ability to customize the Makahiki framework successfully to different environments.

Sergey Negrashov. Design, implementation, and initial evaluation of OPQBox: A low-cost device for crowdsourced power quality monitoring. Technical Report CSDL-14-11, University of Hawaii, Honolulu, HI, November 2014. [ .pdf ]

The face of power distribution has changed rapidly over the last several decades. Modern grids are evolving to accommodate distributed power generation and highly variable loads. Furthermore, as the devices we use every day become more electronically complex, they become increasingly more sensitive to power quality problems. Distributed power quality monitoring systems have been shown to provide real-time insight into the status of the power grid and even pinpoint the origin of power disturbances [6]. Oahu’s isolated power grid, combined with high penetration of distributed renewable energy generators, creates perfect conditions to assess the feasibility and utility of such a network. Over the last three months we have been collecting power quality data from several locations on Oahu as a pilot study for a larger monitoring system. This paper describes our methodology and our hardware and software design, and presents a preliminary analysis of the data we have collected so far. Lastly, this paper presents a design for an improved power quality monitor based upon the pilot study experiences.

2013

Robert S. Brewer. Fostering Sustained Energy Behavior Change And Increasing Energy Literacy In A Student Housing Energy Challenge. PhD thesis, University of Hawaii, Department of Information and Computer Sciences, March 2013. [ .pdf ]

We designed the Kukui Cup challenge to foster energy conservation and increase energy literacy. Based on a review of the literature, the challenge combined a variety of elements into an overall game experience, including: real-time energy feedback, goals, commitments, competition, and prizes. We designed a software system called Makahiki to provide the online portion of the Kukui Cup challenge. Energy use was monitored by smart meters installed on each floor of the Hale Aloha residence halls on the University of Hawai`i at Manoa campus. In October 2011, we ran the UH Kukui Cup challenge for the over 1000 residents of the Hale Aloha towers. To evaluate the Kukui Cup challenge, I conducted three experiments: challenge participation, energy literacy, and energy use. Many residents participated in the challenge, as measured by points earned and actions completed through the challenge website. I measured the energy literacy of a random sample of Hale Aloha residents using an online energy literacy questionnaire administered before and after the challenge. I found that challenge participants' energy knowledge increased significantly compared to non-challenge participants. Positive self-reported energy behaviors increased after the challenge for both challenge participants and non-participants, leading to the possibility of passive participation by the non-challenge participants. I found that energy use varied substantially between and within lounges over time. Variations in energy use over time complicated the selection of a baseline of energy use to compare the levels during and after the challenge. The best team reduced its energy use during the challenge by 16%. However, team energy conservation did not appear to correlate to participation in the challenge, and there was no evidence of sustained energy conservation after the challenge. The problems inherent in assessing energy conservation using a baseline call into question this common practice. My research has generated several contributions, including: a demonstration of increased energy literacy as a result of the challenge, the discovery of fundamental problems with the use of baselines for assessing energy competitions, the creation of two open source software systems, and the creation of an energy literacy assessment instrument.

Philip M. Johnson, Yongwen Xu, Robert S. Brewer, Carleton A. Moore, George E. Lee, and Andrea Connell. Makahiki+WattDepot: An open source software stack for next generation energy research and education. In Proceedings of the 2013 Conference on Information and Communication Technologies for Sustainability (ICT4S), February 2013. [ .pdf ]

Satisfying the radically different requirements and operating assumptions of the next generation smart grid requires new kinds of software that enable research and experimentation into the ways that electrical energy production and consumption can be collected, analyzed, visualized, and provided to consumers. Since 2009, we have been designing, implementing, and evaluating an open source software “stack” to facilitate this research. This software stack consists of two custom systems called WattDepot and Makahiki, along with the open source components they rely upon (Java, Restlet, Postgres, Python, Django, Memcache). In this paper, we detail the novel features of WattDepot and Makahiki, our experiences using them for research and education, and additional ways they can be used for next generation energy research and education.

Philip M. Johnson. Searching under the streetlight for useful software analytics. IEEE Software, July 2013. [ .pdf ]

For more than 15 years, researchers at the Collaborative Software Development Laboratory (CSDL) at the University of Hawaii at Manoa have looked for analytics that help developers understand and improve development processes and products. Through this research, we’ve come to believe that the “searching under the streetlight” metaphor is useful for understanding both our research and that of others in this area.

Robert S. Brewer, Yongwen Xu, George E. Lee, Michelle Katchuck, Carleton A. Moore, and Philip M. Johnson. Energy feedback for smart grid consumers: Lessons learned from the Kukui Cup. In Proceedings of Energy 2013, pages 120-126, March 2013. [ .pdf ]

To achieve the full benefits of the Smart Grid, end users must become active participants in the energy ecosystem. This paper presents the Kukui Cup challenge, a serious game designed around the topic of energy conservation which incorporates a variety of energy feedback visualizations, a multifaceted serious game with online educational activities, and real-world activities such as workshops and excursions. We describe our experiences in developing energy feedback visualizations in the Kukui Cup based on in-lab evaluations and field studies in college residence halls. We learned that energy feedback systems should address three factors: they should be actionable, domain knowledge must go hand in hand with energy feedback systems, and feedback must be “sticky” to lead to changes in behaviors and attitudes.

Jordan Takayama. Making game design as easy as gaming: Creating an administrative interface for the makahiki framework. Technical Report CSDL-13-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2013. [ .pdf ]

The usability of an application is a measure of how effectively it can be used to perform the tasks it was designed for in its target environment. A user interface – the toolbars, menus, and other elements that control an application – determines how quickly and correctly users can complete tasks. Makahiki is an application framework for designing serious games (games which teach a serious subject) focused on energy conservation, recycling, and clean energy issues. A problem with the current iteration of Makahiki is that creating competitions in its administrator interface is time-consuming. To identify the reasons for this problem, I will work with the Makahiki development team to distribute surveys to identify usability issues. For the first survey, University of Hawai‘i at Mānoa students will configure Makahiki for a course assignment, self-report the time required for each part of the configuration, and describe usability problems. I will develop a design tool that will address these problems. After the design tool is completed, some of the first survey's questions will be reused with a second group of test subjects, comparing their performance with the design tool against the first group's performance with the original application on a subset of the same tasks. This will determine if configuration times decreased and whether the usability issues of the original application were addressed by the redesign. The collection of usability data and the creation of the design tool will address Makahiki's usability problems while enhancing the understanding of how user interface design styles affect usability.

Yongwen Xu, Philip M. Johnson, Carleton A. Moore, Robert S. Brewer, and Jordan Takayama. SGSEAM: Assessing serious game frameworks from a stakeholder experience perspective. In Proceedings of the First International Conference on Gameful Design, Research, and Applications (Gamification 2013), October 2013. [ .pdf ]

Assessment of serious game frameworks is emerging as an important area of research. This paper describes an assessment mechanism called the Serious Game Stakeholder Experience Assessment Method (SGSEAM). SGSEAM is designed to provide detailed insights into the strengths and shortcomings of serious game frameworks through a stakeholder perspective based approach. In this paper, we report on the use of SGSEAM to assess Makahiki, an open source serious game framework for sustainability. Our results provide useful insights into both Makahiki as a serious game framework and SGSEAM as an assessment method.

Robert S. Brewer, Yongwen Xu, George E. Lee, Michelle Katchuck, Carleton A. Moore, and Philip M. Johnson. Three principles for the design of energy feedback visualizations. International Journal On Advances in Intelligent Systems, 6(3 & 4):188-198, 2013. [ .pdf ]

To achieve the full benefits of the Smart Grid, end users must become active participants in the energy ecosystem. This paper presents the Kukui Cup challenge, a multifaceted serious game designed around the topic of energy conservation that incorporates a variety of energy feedback visualizations, online educational activities, and real-world activities such as workshops and excursions. We describe our experiences developing energy feedback visualizations in the Kukui Cup based on in-lab evaluations and field studies in college residence halls. We learned that energy feedback systems should address these three factors: 1) they should be actionable, 2) domain knowledge should go hand in hand with feedback systems, and 3) feedback must be “sticky” if it is to lead to changes in behaviors and attitudes. We provide examples of both successful and unsuccessful visualizations, and discuss how they address the three factors we have identified.

2012

George E. Lee. Makahiki: An extensible open-source platform for creating energy competitions. Master's thesis, University of Hawaii, June 2012. [ .pdf ]

Due to rising costs and the questionable future of our non-renewable energy reserves, individuals need to become aware of their energy usage. In order to instill these habits earlier, organizations have held energy competitions to promote the reduction of energy use. This also has the side effect of reducing the energy cost to the organization holding the competition. Typically, these competitions are held in colleges and universities, and there are companies that can provide hardware and software to support them. However, since such solutions can be expensive, we would like a free, open source solution that can be used by any organization. We created Makahiki to be an open source framework for sustainability competitions. We also designed it to be a platform for researchers to investigate user behaviors during an energy competition. To validate our design, we evaluated and tested it in three phases during development. In the mockup phase, we validated our design before doing any implementation. In the onboarding phase, we investigated how individuals interact with the system when they visit it for the first time. Finally, in the beta phase, we simulated the competition on a much smaller scale in order to observe how Makahiki might be used in an actual competition. Following these evaluations, Makahiki was used to support the 2011 Kukui Cup, which was held in mid-October. In summary, we claim the following contributions: 1. An open source system for creating serious games for energy competitions. 2. A research platform on which researchers can observe user behavior during energy competitions. 3. A methodology for evaluating and testing serious games that involve competitions over a period of time.

George E. Lee, Yongwen Xu, Robert S. Brewer, and Philip M. Johnson. Makahiki: An open source game engine for energy education and conservation. Technical Report CSDL-11-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 2012. [ .pdf ]

The rising cost, increasing scarcity, and environmental impact of fossil fuels as an energy source makes a transition to cleaner, renewable energy sources an international imperative. This paper presents Makahiki, an open source game engine for energy education and conservation. Developed for a residence hall energy competition, Makahiki facilitates the implementation of “serious games” that motivate players to learn about energy issues, improve their intuition about energy consumption, and understand how to use energy more efficiently in their normal life. Initial deployment of Makahiki at the University of Hawaii in Fall 2011 has revealed useful insights into its game mechanics, ways to improve the next Kukui Cup challenge, and insights into the changes we need to make to better facilitate adaptation to other energy contexts.

Pavel Senin. Recognizing recurrent development behaviors corresponding to android os release life-cycle. In Proceedings of the 2012 International Conference on Software Engineering Research and Practice, Las Vegas, NV, May 2012. [ .pdf ]

Within the field of software repository mining (MSR), researchers deal with the problem of discovering interesting and actionable information about software projects. It is common practice to perform analyses at various levels of abstraction of change events, for example by aggregating change events into time series. Following this practice, I investigate the applicability of SAX-based approximation and indexing of time series with tf*idf weights in order to discover recurrent behaviors within the development process. The proposed workflow starts by extracting and aggregating revision control data, followed by reduction and transformation of the aggregated data into a symbolic space with PAA and SAX. The resulting SAX words are then grouped into dictionaries associated with software process constraints known to influence behaviors, such as time, location, and employment. These, in turn, are investigated using tf*idf statistics as a dissimilarity measure in order to discover behavioral patterns. As a proof of concept, I applied this technique to software process artifact trails from Android OS development, where it was able to discover recurrent behaviors in the “new code lines dynamics” before and after a release. By building a classifier upon these behaviors, I was able to successfully recognize pre- and post-release behaviors within the same and similar sub-projects of Android OS.
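
As a rough illustration of the PAA-plus-SAX step described in this workflow, the sketch below z-normalizes a series of aggregated change counts, reduces it with PAA, and maps the segment means to a small SAX alphabet. The segment count, alphabet size, and sample data are illustrative assumptions, not the parameters used in the paper.

    import numpy as np

    # Breakpoints for a 4-symbol SAX alphabet: standard normal quantiles that
    # split N(0,1) into four equal-probability bins.
    BREAKPOINTS = [-0.6745, 0.0, 0.6745]
    ALPHABET = "abcd"

    def znorm(series):
        """Z-normalize a time series (zero mean, unit variance)."""
        series = np.asarray(series, dtype=float)
        std = series.std()
        return (series - series.mean()) / std if std > 0 else series - series.mean()

    def paa(series, segments):
        """Piecewise Aggregate Approximation: mean of each of `segments` chunks."""
        chunks = np.array_split(np.asarray(series, dtype=float), segments)
        return np.array([c.mean() for c in chunks])

    def sax(series, segments=8):
        """Convert a raw series into a SAX word."""
        reduced = paa(znorm(series), segments)
        return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, v)] for v in reduced)

    # Hypothetical example: daily counts of new code lines aggregated from revision control data.
    daily_new_lines = [12, 15, 9, 40, 55, 60, 58, 20, 18, 11, 7, 5, 30, 45, 50, 48]
    print(sax(daily_new_lines, segments=8))   # prints an 8-character SAX word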

Robert S. Brewer. Results from energy audit of Hale Aloha. Technical Report CSDL-11-12, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, Jan 2012. [ .pdf ]

Matthias Fripp, Philip M. Johnson, Alexandar Kavcic, Anthony Kuh, and Dora Nakafuji. A proposal for a smart, sustainable microgrid for the university of hawaii at manoa campus. Technical Report CSDL-12-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 2012. [ .pdf ]

The state of Hawaii is more dependent on oil than any other state in the nation, using it for most electricity generation as well as transportation. The state-sponsored Hawaii Clean Energy Initiative calls for Hawaii to sharply reduce this dependence, obtaining 70 percent of its energy from clean energy sources by 2030. The University of Hawaii is playing a major role in this effort by conducting research, education, and workforce training in energy and sustainability. The project considers both theoretical and practical aspects of response, control and status on a local, interconnected sub-system of the grid and elucidates its behavior when distributed renewable energy sources are added. The result will be a smart, sustainable microgrid. Four interlinked research projects will be integrated into a graduate and undergraduate education program on smart grids, renewable energy, and energy efficiency.

Philip M. Johnson, Yongwen Xu, Robert S. Brewer, George E. Lee, Michelle Katchuck, and Carleton A. Moore. Beyond kWh: Myths and fixes for energy competition game design. In Proceedings of Meaningful Play 2012, pages 1-10, October 2012. [ .pdf ]

The Kukui Cup project investigates the use of “meaningful play” to facilitate energy awareness, conservation and behavioral change. Each Kukui Cup Challenge combines real world and online environments in an attempt to combine information technology, game mechanics, educational pedagogy, and incentives in a synergistic and engaging fashion. We challenge players to: (1) acquire more sophistication about energy concepts and (2) experiment with new behaviors ranging from micro (such as turning off the lights or installing a CFL) to macro (such as taking energy-related courses, joining environmental groups, and political/social advocacy.) To inform the design of the inaugural 2011 Kukui Cup, we relied heavily on prior collegiate energy competitions, of which there have been over 150 in the past few years. Published accounts of these competitions indicate that they achieve dramatic reductions in energy usage (a median of 22%) and cost savings of tens of thousands of dollars. In our case, the data collected from the 2011 Kukui Cup was generally in agreement, with observed energy reductions of up to 16% when using data collection and analysis techniques typical to these competitions. However, our analysis process caused us to look more closely at the methods employed to produce outcome data for energy competitions, with unexpected results. We now believe that energy competitions make significant unwarranted assumptions about the data they collect and the way they analyze them, which calls into question both the accuracy of published results from this literature and their effectiveness as serious games. We believe a closer examination of these issues by the community can help improve the design not only of future energy challenges, but other similar forms of serious games for sustainability. In this paper, we describe the Kukui Cup, the design myths it uncovered, and the fixes we propose to improve future forms of meaningful play with respect to energy in particular and serious games in general.

Sara K. Cobble. Encouraging environmental literacy on campus: A case study of the kukui cup. Technical Report CSDL-12-14, College of Humanities and Social Sciences, Hawaii Pacific University, Honolulu, Hawaii, December 2012. [ .pdf ]

Environmental literacy measures a person's understanding of ecological principles and the ways in which human systems interact with the environment. It falls on a continuum of varying degrees of aptitude, from nominal to functional to operational, and includes behaviors, attitudes, concerns and knowledge about the environment (Roth, 1992). This skill-set comprises both cognitive and affective types of knowledge. A high level of environmental literacy will be necessary to navigate a future in which these skills are needed (King, 2000). Unfortunately, only 1-2 percent of American adults are considered environmentally literate (Coyle, 2006). Environmental education, the key to producing environmentally literate citizens, has been on the rise since it emerged in the 1970s. At the university level, the number of sustainability programs and initiatives is inspiring (Shephard, 2006). However, many of the changes in higher education have been on physical campuses and not inside the classroom, and sustainability is seen more as a prescriptive fix than a radical change in attitude, concern, knowledge and behavior (Sherman, 2008; 2011). A recent trend on university campuses has been energy-saving competitions in university buildings and on-campus dormitories — over 150 of these competitions have taken place in the last few years, with median energy reductions of 22 percent (Johnson et al, 2011). This paper is a case study of one of those competitions: the Kukui Cup at Hawai'i Pacific University (HPU). In the three-week-long competition, students living on campus played an online game and participated in associated educational activities using resources from the Collaborative Software Development Laboratory at the University of Hawaii at Mānoa. The Kukui Cup was an attempt to use gamification techniques, competition and technology to encourage changes in environmental behaviors, attitudes, concerns and knowledge of on-campus residents, with hopes of improving their overall levels of energy and environmental literacy. This study aims to answer the questions: What is the level of environmental literacy of dorm residents at HPU, and how is it affected by participation in an on-campus energy-saving competition?

2011

Robert S. Brewer, George E. Lee, and Philip M. Johnson. The Kukui Cup: a dorm energy competition focused on sustainable behavior change and energy literacy. In Proceedings of the 44th Hawaii International Conference on System Sciences, pages 1-10, January 2011. [ .pdf ]

The Kukui Cup is an advanced dorm energy competition whose goal is to investigate the relationships among energy literacy, sustained energy conservation, and information technology support of behavior change. Two general purpose open source systems have been implemented: WattDepot and Makahiki. WattDepot provides enterprise-level collection, storage, analysis, and visualization of energy data. Makahiki is a web application framework that supports dorm energy competitions of varying degrees of complexity, including a personalized homepage where participants can complete tasks designed to increase energy literacy that can be verified by competition administrators. The technology and approach will be evaluated in a dorm energy competition to take place in the Spring of 2011, with hundreds of University freshmen. The energy use of each pair of dormitory floors will be metered in near-realtime, and the energy literacy of participants will be assessed before and after the competition.

Robert S. Brewer, George E. Lee, Yongwen Xu, Caterina Desiato, Michelle Katchuck, and Philip M. Johnson. Lights Off. Game On. The Kukui Cup: A dorm energy competition. In Proceedings of the CHI 2011 Workshop on Gamification, Vancouver, Canada, May 2011. [ .pdf ]

Our research seeks to investigate the relationships among energy literacy, sustained energy conservation, and information technology support of behavior change through an advanced dorm energy competition to take place in Fall 2011. Game design techniques are used to attract competition participants, keep them engaged, and have a lasting impact in their energy use behavior through retained knowledge of energy obtained via the game environment.

Robert S. Brewer. The Kukui Cup: Shaping everyday energy use via a dorm energy competition. In Proceedings of the CHI 2011 Workshop on Everyday Practice and Sustainable HCI, Vancouver, Canada, May 2011. [ .pdf ]

Our research seeks to investigate the relationships among energy literacy, sustained energy conservation, and information technology support of behavior change through an advanced dorm energy competition to take place in Fall 2011. The competition will attempt to foster changes in participants' everyday energy use by increasing their energy literacy and changing their habits through activities performed during the competition.

Philip M. Johnson. Results from the Kukui Cup anonymous questionnaire for RAs. Technical Report CSDL-11-08, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, Nov 2011. [ .pdf ]

Kaveh Abhari, Hana Bowers, Robert S. Brewer, Gregory Burgess, Caterina Desiato, Philip M. Johnson, Michelle Katchuck, Risa Khamsi, George E. Lee, Yongwen Xu, Alex Young, and Chris Zorn. Poster: Lights off. game on. the 2011 kukui cup. Behavior, Energy, and Climate Change (BECC) 2011 Poster Session, Washington, DC., November 2011. [ .pdf ]

This poster presents the Kukui Cup energy challenge and early results from its use in 2011.

2010

Robert S. Brewer. Fostering sustained energy behavior change and increasing energy literacy in a student housing energy competition. Technical Report CSDL-10-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2010. [ .pdf ]

The world is in the grip of a crisis in the way energy is produced and consumed. Climate change represents a huge threat to the modern way of life, particularly for island communities like Hawaii. Many changes to our energy system will be required to resolve the crisis, and one promising part of the solution is reducing energy usage through changes in behavior. Energy usage in similar homes can differ by a factor of two to four, demonstrating the potential contribution of behavior change to resolving the crisis. This research project seeks to find ways to foster sustainable changes in behavior that lead to reduced energy usage. The research will be conducted in the context of a dorm energy competition on the UH Manoa campus in October 2010. Power meters will be installed on each floor of two freshman residence halls. Each floor will compete to use the least energy during the 4-week competition. A competition website will be created, where participants can log in to view near-realtime data about their floor's power usage, and also select from a variety of tasks to perform. Each task is designed to increase the participant's energy literacy (knowledge, positive attitudes, and behaviors related to energy), and a certain number of points are assigned for the completion of each task. The points provide a parallel competition to motivate participants to perform the tasks. Prizes will be awarded to the floors using the least energy and the participants earning the most points. Several research questions will be investigated using the data collected, including how energy usage changes after the competition is over, whether the website tasks affect energy literacy, and whether floors with higher energy literacy show more sustained energy conservation after the competition is complete. The research questions will be investigated using energy data from the meters, log files from the website, and an energy literacy survey administered before and after the competition.

Philip M. Johnson. The Kukui Cup: Proposal for a UH residence hall energy competition. Technical Report CSDL-10-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 2010. [ .pdf ]

Kukui nut oil was used by ancient Hawaiians to light their homes. In honor of this original form of energy in the islands, we propose to design and implement a Dorm Energy Competition for the University of Hawaii called the “Kukui Cup”. It will be held for the first time during the month of October, 2011. The three goals of this project are: (1) Improve the energy literacy of participating students; (2) Conduct innovative research in information technology for energy-related behavioral change; and (3) Save money for the university by reducing energy costs. As part of this project, we will implement a new web application to provide information regarding UH Dorm Energy in general and the Kukui Cup competition in particular. This software will also support research on energy behavior by the Collaborative Software Development Laboratory in the Department of Information and Computer Sciences. We propose to hold the October, 2011 dorm energy competition in three freshman dorms, and then expand the program to include more dorms in future years.

George E. Lee. Makahiki: An extensible open-source platform for creating energy competitions. Technical Report CSDL-10-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2010. [ .pdf ]

Because of our ever-increasing population and our limited natural resources, improving the energy literacy of the population is becoming increasingly important. One way to promote energy-conserving habits is to hold energy competitions to see who can reduce their energy usage the most. A popular place to hold these competitions is in university dorms, where students are making the transition from living with their parents to living on their own. Holding these competitions is a great way to educate the student population, but developing the competition can be costly. Besides prizes for the winning individuals and dorms, creating and maintaining a competition website can take a lot of time. Some groups have turned to software development firms that provide the software and hardware, but at a cost. We propose a system called Makahiki that will provide a free, open-source, and easy-to-implement solution. Using other open source tools such as WattDepot, we aim to create a configurable package for organizations that hope to hold their own energy competitions. To test our implementation, we will hold a dorm energy competition at the University of Hawaii at Manoa in October 2011. We will also test the configurability of our system by implementing another organization's dorm energy competition website.

Robert S. Brewer and Philip M. Johnson. WattDepot: An open source software ecosystem for enterprise-scale energy data collection, storage, analysis, and visualization. In Proceedings of the First International Conference on Smart Grid Communications, pages 91-95, Gaithersburg, MD, October 2010. [ .pdf ]

WattDepot is an open source, Internet-based, service-oriented framework for collection, storage, analysis, and visualization of energy data. WattDepot differs from other energy management solutions in one or more of the following ways: it is not tied to any specific metering technology; it provides high-level support for meter aggregation and data interpolation; it supports carbon intensity analysis; it is architecturally decoupled from the underlying storage technology; it supports both hosted and local energy services; it can provide near-real time data collection and feedback; and the software is open source and freely available. In this paper, we introduce the framework, provide examples of its use, and discuss its application to research and understanding of the Smart Grid.
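
As a purely hypothetical sketch of how a client might pull readings from a WattDepot-style service over HTTP, the snippet below polls a REST resource and prints the latest power values. The host, resource paths, and field names are placeholders invented for illustration; they are not WattDepot's actual API.

    import requests  # assumes the service exposes an HTTP/REST interface

    # Placeholder base URL for a hypothetical WattDepot-style server.
    BASE_URL = "http://localhost:8182/wattdepot"

    def latest_power(source):
        """Fetch the most recent power reading (watts) for a named source.
        The '/sources/<name>/latest' path and 'powerConsumed' field are
        invented for this sketch, not WattDepot's real resource names."""
        resp = requests.get(f"{BASE_URL}/sources/{source}/latest", timeout=5)
        resp.raise_for_status()
        return resp.json()["powerConsumed"]

    if __name__ == "__main__":
        for floor in ["lehua-3", "lehua-4"]:   # hypothetical dorm-floor source names
            print(floor, latest_power(floor), "W")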

Robert S. Brewer and Philip M. Johnson. WattDepot: Enterprise-scale, sensor-based energy data collection and analysis. Technical Report CSDL-10-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2010. [ .pdf ]

Enterprise-scale energy data collection and analysis is becoming increasingly important with the advent of the "Smart" grid. We have developed and released an open source, sensor-based system called WattDepot for collecting, storing, and analyzing energy data to fill the niche between individual households and entire utility grids. WattDepot is designed to allow data collection from a wide variety of energy production and consumption devices, and to support diverse visualizations and delivery of the data. We are using WattDepot to support a campus dormitory energy competition for Fall 2010. Since the process of selecting, purchasing, and installing the meters is ongoing, we have developed an end-to-end simulation of dorm energy to ensure that the WattDepot software sensors would work with any of the chosen meters. WattDepot's sensor-based, service-oriented architecture makes it useful to a wide variety of energy application domains.

Pavel Senin. Software trajectory analysis: An empirically based method for automated software process discovery. In Proceedings of the Fifth International Doctoral Symposium on Empirical Software Engineering, Bolzano-Bozen, Italy, September 2010. [ .pdf ]

A process defines a set of routines which allow one to organize, manage, and improve activities in order to reach a goal. With expert intuition and a priori knowledge, software processes have long been modeled, resulting in the Waterfall, Spiral, and other development models. Later, with the wide use of SCM systems and the public availability of primitive software process artifact trails, formal methods such as Petri Nets, State Machines, and others have been applied to the problem of recurrent process discovery and control. Recent advances in metrics efforts, increased use of continuous integration, and extensive documentation of the performed process make information-rich, fine-grained software process artifact trails available for analysis. This fine-grained data has the potential to shed new light on the software process. In this work I propose to investigate an automated technique for the discovery and characterization of recurrent behaviors in software development - "programming habits" - at either the individual or the team level.

Todd Baumeister. Literature review on smart grid cyber security. Technical Report CSDL-10-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2010. [ .pdf ]

The current U.S. electrical power grid is an out-of-date infrastructure, and the Smart Grid is an upgrade that will add many new functionalities to meet customers' new power requirements. Updating a system as complex as the electrical power grid has the potential of introducing new security vulnerabilities into the system. This document presents a review of the work related to Smart Grid cyber security. The work reviewed is separated into five categories that make up different components of the Smart Grid: Process Control System (PCS) Security, Smart Meter Security, Power System State Estimation Security, Smart Grid Communication Protocol Security, and Smart Grid Simulation for Security Analysis. The Smart Grid is a large complex system, and it still requires a lot of cyber security design work.

Philip M. Johnson and Robert S. Brewer. Poster: Wattdepot: Open source software for energy data collection and analysis. Behavior, Energy, and Climate Change (BECC) 2010 Poster Session, Sacramento, CA, November 2010. [ .pdf ]

This poster presents the components of the WattDepot system and early experiences with its use.

2009

Hongbing Kou, Philip M. Johnson, and Hakan Erdogmus. Operational definition and automated inference of test-driven development with Zorro. Journal of Automated Software Engineering, December 2009. [ .pdf ]

Test-driven development (TDD) is a style of development named for its most visible characteristic: the design and implementation of test cases prior to the implementation of the code required to make them pass. Many claims have been made for TDD: that it can improve implementation as well as design quality, that it can improve productivity, that it results in 100% coverage, and so forth. However, research to validate these claims has yielded mixed and sometimes contradictory results. We believe that at least part of the reason for these results stems from differing interpretations of the TDD development style, along with an inability to determine whether programmers actually follow whatever definition of TDD is in use. Zorro is a system designed to automatically determine whether a developer is complying with an operational definition of Test-Driven Development (TDD) practices. Automated recognition of TDD can benefit the software development community in a variety of ways, from inquiry into the “true nature” of TDD, to pedagogical aids to support the practice of test-driven development, to support for more rigorous empirical studies on the effectiveness of TDD in both laboratory and real world settings. This paper describes the Zorro system, its operational definition of TDD, the analyses made possible by Zorro, and two empirical evaluations of the system. Our research shows that it is possible to define an operational definition of TDD that is amenable to automated recognition, and illustrates the architectural and design issues that must be addressed in order to do so. Zorro has implications not only for the practice of TDD, but also for software engineering “micro-process” definition and recognition through its parent framework, Software Development Stream Analysis.
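
To make the idea of an operational, rule-based definition concrete, the toy sketch below classifies a single episode of time-stamped development events as test-first or code-first. The event vocabulary and the single rule are invented for illustration and are far simpler than Zorro's actual rule set.

    from dataclasses import dataclass

    @dataclass
    class Event:
        """A time-stamped development event (a simplified stand-in for sensor data)."""
        time: int          # seconds since episode start
        kind: str          # 'edit-test', 'edit-prod', 'test-fail', 'test-pass'

    def classify_episode(events):
        """Toy rule: an episode counts as 'test-first' if a test edit and a failing
        test run both occur before the first production edit, and the episode ends
        with a passing test run."""
        events = sorted(events, key=lambda e: e.time)
        first_prod = next((e.time for e in events if e.kind == "edit-prod"), None)
        if first_prod is None:
            return "test-only"
        test_edit_before = any(e.kind == "edit-test" and e.time < first_prod for e in events)
        fail_before = any(e.kind == "test-fail" and e.time < first_prod for e in events)
        ends_passing = events[-1].kind == "test-pass"
        return "test-first" if (test_edit_before and fail_before and ends_passing) else "code-first"

    episode = [Event(0, "edit-test"), Event(30, "test-fail"),
               Event(60, "edit-prod"), Event(90, "test-pass")]
    print(classify_episode(episode))   # prints: test-first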

Philip M. Johnson and Shaoxuan Zhang. We need more coverage, stat! Experience with the software ICU. In Proceedings of the 2009 Conference on Empirical Software Engineering and Measurement, Orlando, Florida, October 2009. [ .pdf ]

For empirical software engineering to reach its fullest potential, we must develop effective, experiential approaches to learning about it in a classroom setting. In this paper, we report on a case study involving a new approach to classroom-based empirical software engineering called the “Software ICU”. In this approach, students learn about nine empirical project “vital signs” and use the Hackystat Framework to put their projects into a virtual “intensive care unit” where these vital signs can be assessed and monitored. We used both questionnaire and log data to gain insight into the strengths and weaknesses of this approach. Our evaluation provides both quantitative and qualitative evidence concerning the overhead of the system; the relative utility of different vital signs; the frequency of use; and the perceived appropriateness outside of the classroom setting. In addition to benefits, we found evidence of measurement dysfunction induced directly by the presence of the Software ICU. We compare these results to case studies we performed in 2003 and 2006 using the Hackystat Framework but not the Software ICU. We use these findings to orient future research on empirical software engineering both inside and outside of the classroom.

Shaoxuan Zhang and Philip M. Johnson. Results from the 2008 classroom evaluation of Hackystat. Technical Report CSDL-09-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, March 2009. [ .pdf ]

This report presents the results from a classroom evaluation of Hackystat by ICS 413 students at the end of Fall, 2008. The evaluation focuses on the use of the Software ICU user interface developed using Hackystat Version 8. Results indicate that sensor installation is somewhat more complicated than previously due to the absence of a client-side installer. The three most used "vital signs" were DevTime, Coverage, and Commit. Over half of the respondents felt that the Software ICU colors accurately represented the health of the project. Most students felt that the Software ICU would be useful in a professional context.

Robert S. Brewer. Literature review on carbon footprint collection and analysis. Technical Report CSDL-09-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 2009. [ .pdf ]

The Personal Environmental Tracker (PET) is a proposed system for helping people to track their impact on the environment via data collected from sensors, and to make changes to reduce that impact, creating a personal feedback loop. This document presents a review of the work related to this research program, including: environmental research, economic factors regarding energy efficiency, methods of providing feedback on energy usage, motivating users to change their behavior, suggestions for the design of persuasive environmental systems, a review of related systems, and the calculation of carbon emissions.

Philip M. Johnson, Shaoxuan Zhang, and Pavel Senin. Experiences with Hackystat as a service-oriented architecture. Technical Report CSDL-09-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 2009. [ .pdf ]

Hackystat is an open source framework for automated collection and analysis of software engineering process and product data. Hackystat has been in development since 2001, and has gone through eight major architectural revisions during that time. In 2007, we performed the latest architectural revision, whose primary goal was to reimplement Hackystat as a service-oriented architecture (SOA). This version has now been in public release for a year, and this paper reports on our experiences: the motivations that led us to reimplement the system as a SOA, the costs and benefits of that conversion, and our lessons learned.

Pavel Senin. Literature review on time series indexing. Technical Report CSDL-09-08, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2009. [ .pdf ]

Similarity search in time-series databases has become an active research area in the past decade due to the tremendous growth in the amount of temporal data collected and publicly available. The complexity of this similarity problem lies in the high dimensionality of the temporal data, which makes conventional methods inappropriate. The most promising approaches involve dimensionality reduction and indexing techniques, which are the subject of this review. After a general introduction to time series and classical time-series analyses, we discuss in detail time-series normalization techniques and relevant distance metrics. We conclude with a review of the dimensionality-reduction and indexing methods proposed to date.
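
As a small illustration of why normalization is treated as a prerequisite for distance-based similarity search, the snippet below compares two series with the same shape but different offset and amplitude, before and after z-normalization. The series are made-up numbers.

    import numpy as np

    def znorm(x):
        """Z-normalize: remove offset and scale so only shape is compared."""
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()

    def euclidean(x, y):
        """Euclidean distance between two equal-length series."""
        return float(np.linalg.norm(np.asarray(x, float) - np.asarray(y, float)))

    # Two series with the same shape but different offset and amplitude.
    a = [1, 2, 3, 2, 1]
    b = [10, 20, 30, 20, 10]
    print(euclidean(a, b))                  # large: raw distance is dominated by scale
    print(euclidean(znorm(a), znorm(b)))    # ~0: after normalization the shapes match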

Pavel Senin. Software trajectory analysis: An empirically based method for automated software process discovery. Ph.D. Thesis Proposal CSDL-09-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, August 2009. [ .pdf ]

In this research, I will apply knowledge discovery and data mining techniques to the domain of software engineering in order to evaluate their ability to automatically notice interesting recurrent patterns of behavior. While I am not proposing to be able to infer a complete and correct software process model, my system will provide its users with a formal description of recurrent behaviors in their software development. The proposed contributions of my research will include: (1) the implementation of a system aiding in discovery of novel software process knowledge through the analysis of fine-grained software process and product data; (2) experimental evaluation of the system, which will provide insight into its strengths and weaknesses; and (3) the possible discovery of useful new software process patterns.

Shaoxuan Zhang. Learning empirical software engineering using software intensive care unit. M.S. Thesis CSDL-09-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2009. [ .pdf ]

In software engineering, the importance of measurement is well understood, and many software development metrics have been developed to support measurement. However, as the number of metrics increases, the effort required to collect the data, analyze them, and interpret the results quickly becomes overwhelming. This problem is even more critical when teaching empirical software engineering. The Software Intensive Care Unit is a new approach to facilitating software measurement and control with multiple software development metrics. It uses the Hackystat system for automated data collection and analysis, and then uses the collected analysis data to create a monitoring interface for multiple “vital signs”. A vital sign wraps a software metric in an easy-to-use presentation, consisting of a historical trend and the most recent value, both colored according to the metric's “health” state. My research deployed and evaluated the Software ICU in a senior-level software engineering course. Students' usage was logged in the system, and a survey was conducted. The results provide supporting evidence that the Software ICU helps students in course project development and project team organization. The study also revealed some limitations of the system, including inappropriate vital sign presentation and measurement dysfunction.

Philip M. Johnson. Empirical computational thinking. Technical Report CSDL-09-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2009. [ .pdf ]

This technical report presents an edited version of a proposal to the NSF CPATH program. The vision of this proposal is to develop and institutionalize a new approach to computational thinking where abstraction and automation combine to transform the use of empirical thinking in software development. We call this approach “empirical computational thinking”, or eCT. The goal of this research is to explore, evaluate, and institutionalize techniques and technologies for eCT, building upon research and education by ourselves and others in empirically-based software development.

Robert S. Brewer. Proposal for electricity conservation experiments in saunders hall. Technical Report CSDL-09-12, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 2009. [ .pdf ]

The University of Hawaii at Manoa has set the goals of reducing its electricity usage by 30 percent by 2012 and 50 percent by 2015 (based on a 2003 benchmark). A variety of tactics will be required to meet these aggressive goals. One promising technique is to encourage the occupants of buildings to reduce their electricity usage. There are a variety of possible interventions that may encourage occupants to reduce their electricity usage. To assess the relative effectiveness of the interventions, we plan a series of experiments in Saunders. However, the participants of each experiment will be the occupants of Saunders, rather than a set of participants recruited anew for each experiment. We expect two negative consequences of the continuity of the subjects: reduced subject interest/enthusiasm, and diminishing conservation returns.

Philip M. Johnson. Human centered information integration for the smart grid. Technical Report CSDL-09-15, University of Hawaii, Honolulu, HI, December 2009. [ .pdf ]

The "Smart Grid" represents a new vision for the electrical infrastructure of the United States, whose goals include more active participation by consumers, new generation and storage options including renewable energy, and new products, services, and markets. To reach its full potential, the Smart Grid must provide information to consumers in a way that enables positive, sustained changes to energy-related behaviors. The central question to be pursued in this research proposal is: What kinds of information, provided in what ways and at what times, enables consumers to make positive, sustained changes to their energy consumption behaviors? Prior research indicates that such changes can potentially be motivated by an appropriate combination of personalized information, general and specific commitments, achievable goals, social reinforcement, feedback, and financial incentives. This project will develop a collection of open source components called WattBlocks, which will provide novel and useful scientific infrastructure for investigating the ways in which energy-related information can affect human behavior. The project will also develop eSpheres, a novel social networking application that provides users with access to energy-related communities at configurable levels of scale. The combination of WattBlocks and eSpheres will lower the technological efforts required for empirical, replicable studies of human energy-related behaviors. The project will use this infrastructure in a series of two case studies, one involving campus dormitory energy competitions and one involving community home energy challenges. The project will investigate a number of important research questions, including: (1) What are the requirements for consumer-facing, open source, scientific energy information infrastructure? (2) What are the strengths and weaknesses of a dedicated social network technology like eSpheres for energy behavior change? (3) What combination of behavioral change motivators, under what conditions, induce positive change? (4) What factors influence the sustainability of these changes? (5) What is the influence of energy data feedback latency (i.e. 1 minute, 15 minutes, 1 hour, 1 day) on behavioral change?

Myriam Leggieri. Linked data applied to collaborative software development: A case study of hackystat. Technical Report CSDL-09-16, University of Bari, Italy, December 2009. [ .pdf ]

This thesis investigates a new way to take advantage of RDF metadata to support Collaborative Software Development. RDF metadata helps developers overcome typical problems in iterative software development, such as exceptions thrown at run-time, making design and implementation decisions within previously unknown domains, and using previously unknown tools or libraries. Solutions usually consist of searching for suggestions in forum posts, the source code of similar projects, direct contact with specific experts, and so on. The main problems with this approach are the time wasted manually extracting the desired information from unstructured documents, the low effectiveness of search engines, and the lack of information about the actual expertise of the people contacted directly. In contrast, having information about projects and issues semantically structured with RDF metadata can speed up retrieval of the relevant details. Dynamic creation of RDF links to similar external RDF metadata allows users to avoid searching or analyzing search results. Finally, metadata about users, including quality measures from a trustworthy source such as Telemetry, can allow the user to trust a developer's actual expertise. Such RDF metadata and links, together with HTTP URIs, are provided by the Hackystat LinkedSensorData (LiSeD) service.

Herve Weitz. Applying case-based reasoning for building autonomic service-oriented systems. Technical Report CSDL-09-17, University of Limerick, Ireland, September 2009. [ .pdf ]

Service-oriented computing is considered a successful approach to building large-scale software systems that span the internet and improve software reuse. However, service-oriented architectures are complex and hard to maintain. A service may run on many machines, and a single machine may host many services. The distributed composition of services hides a large amount of complexity in the management of the service-oriented architecture. Users must deal with complex service configurations to achieve functional and quality requirements, so the complexity of the system demands considerable administrator intervention. Even with that effort, the configuration may not be good enough: it is hard for an administrator to monitor individual services and the service-oriented system as a whole to determine whether it is running optimally. Hence a growing trend toward autonomic service-oriented systems has emerged. In mid-October 2001, IBM released a manifesto arguing that the main obstacle to further progress in the IT industry is a looming software complexity crisis. The manifesto claimed that the difficulty of managing today's computer systems goes well beyond the administration of individual software environments; computing systems' complexity appears to be approaching the limits of human capability, leaving no way to make timely, decisive responses to a rapid stream of changing and conflicting demands. This dissertation discusses autonomic computing in the context of service-oriented computing. We present a framework that lays the foundation for self-healing, self-reconfiguring, self-optimizing, and self-protecting service-oriented systems, and we apply and implement the framework in Hackystat, an open source system developed at the University of Hawaii. We also discuss the role of service-oriented computing in autonomic computing, where it is fundamental to the relationships between autonomic elements. In the end, we provide a global overview of the autonomic and service-oriented computing domains and how they can be combined in both directions. As part of Google Summer of Code, we implemented an open source framework called Hackystat Service Manager for achieving an autonomic service-oriented architecture in Hackystat; it can be evolved, evaluated, or adapted to any other service-oriented system.

2008

Robert S. Brewer. Carbon metric collection and analysis with the personal environmental tracker. In Proceedings of the UbiComp 2008 Workshop on Ubiquitous Sustainability: Citizen Science and Activism, Seoul, South Korea, September 2008. [ .pdf ]

The Personal Environmental Tracker (PET) is a proposed system for helping people to track their impact on the environment, and to make changes to reduce that impact, creating a personal feedback loop. PET consists of sensors that collect data such as home electricity or gasoline usage and send it to a database for analysis and presentation to the user. By collecting data from diverse sources, PET can help users decide what aspect of their lives they should make changes in first to maximize their reduction in environmental impact. PET's open architecture will allow other ubiquitous sustainability researchers to leverage the infrastructure for research in sensors, data analysis, or presentation of data.

Pavel Senin. Dynamic time warping algorithm review. Technical Report CSDL-08-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2008. [ .pdf ]

This technical report describes the Dynamic Time Warping algorithm and how it can be applied to support identification of similar software development projects through analysis of their telemetry data.
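
For reference, a compact dynamic-programming implementation of the classic DTW distance is sketched below; the two example sequences are made-up numbers, and the report's actual application to project telemetry data may differ in detail.

    import math

    def dtw_distance(a, b):
        """Classic dynamic-programming DTW: cost of the best warping path between
        two sequences, using absolute difference as the local cost."""
        n, m = len(a), len(b)
        cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

    # Example: two made-up telemetry streams with similar shape but shifted in time.
    series_a = [10, 12, 30, 31, 12, 11]
    series_b = [11, 10, 11, 29, 32, 13]
    print(dtw_distance(series_a, series_b))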

Alexey Olkov and Daniel Port. Using simulation to investigate IT micro-processes. Technical Report CSDL-08-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2008. [ .pdf ]

This technical report describes how simulation can be used to (1) gain confidence in empirical analysis of software micro-processes and (2) provide a means to validate or obtain evidence to support software engineering hypotheses and theory.

2007

Philip M. Johnson. Requirement and design trade-offs in Hackystat: An in-process software engineering measurement and analysis system. In Proceedings of the 2007 International Symposium on Empirical Software Engineering and Measurement, Madrid, Spain, September 2007. [ .pdf ]

For five years, the Hackystat Project has incrementally developed and evaluated a generic framework for in-process software engineering measurement and analysis (ISEMA). At least five other independent ISEMA system development projects have been initiated during this time, indicating growing interest and investment in this approach by the software engineering community. This paper presents 12 important requirement and design tradeoffs made in the Hackystat system, some of their implications for organizations wishing to introduce ISEMA, and six directions for future research and development. The three goals of this paper are to: (1) help potential users of ISEMA systems to better evaluate the relative strengths and weaknesses of current and future systems, (2) help potential developers of ISEMA systems to better understand some of the important requirement and design trade-offs that they must make, and (3) help accelerate progress in ISEMA by identifying promising directions for future research and development.

Victor R. Basili, Marvin V. Zelkowitz, Dag Sjoberg, Philip M. Johnson, and Tony Cowling. Protocols in the use of empirical software engineering artifacts. Empirical Software Engineering, 12, February 2007. [ .pdf ]

If empirical software engineering is to grow as a valid scientific endeavor, the ability to acquire, use, share, and compare data collected from a variety of sources must be encouraged. This is necessary to validate the formal models being developed within computer science. However, within the empirical software engineering community this has not been easily accomplished. This paper analyses experience from a number of projects, and defines the issues, which include the following: (1) How should data, testbeds, and artifacts be shared? (2) What limits should be placed on who can use them and how? How does one limit potential misuse? (3) What is the appropriate way to credit the organization and individual that spent the effort collecting the data, developing the testbed, and building the artifact? (4) Once shared, who owns the evolved asset? As a solution to these issues, the paper proposes a framework for an empirical software engineering artifact license, which is intended to address the needs for both creator and user of such artifacts and should foster a market in making available and using such artifacts. If this license framework for sharing software engineering artifacts is commonly accepted, it is considered that it should encourage artifact owners to make the artifacts accessible to others (gaining credit is more likely and misuse is less likely), and it may be easier for other researchers to request artifacts since there will be a well-defined protocol for how to deal with relevant matters.

Philip M. Johnson. Automated software process and product measurement with Hackystat. Dr. Dobbs Journal, January 2007.

This article presents an overview of Hackystat, a system for automated software process and product measurement.

Philip M. Johnson and Hongbing Kou. Automated recognition of test-driven development with Zorro. Proceedings of Agile 2007, August 2007. [ .pdf ]

Zorro is a system designed to automatically determine whether a developer is complying with an operational definition of Test-Driven Development (TDD) practices. Automated recognition of TDD can benefit the software development community in a variety of ways, from inquiry into the “true nature” of TDD, to pedagogical aids to support the practice of test-driven development, to support for more rigorous empirical studies on the effectiveness of TDD in both laboratory and real world settings. This paper introduces the Zorro system, its operational definition of TDD, the analyses made possible by Zorro, and our ongoing efforts to validate the system.

Philip M. Johnson. Ultra-automation and ultra-autonomy for software engineering management of ultra-large-scale systems. In Proceedings of the 2007 Workshop on Ultra Large Scale Systems, Minneapolis, Minnesota, May 2007. [ .pdf ]

“Ultra-Large-Scale Systems: The Software Challenge of the Future” identifies “Engineering Management at Large Scales” as an important focus of research. Engineering management for software typically involves measurement and monitoring of products and processes in order to maintain acceptable levels of important project characteristics including cost, quality, usability, performance, reliability, and so forth. Our research on software engineering measurement over the past ten years has exhibited a trend towards increasing automation and autonomy in the collection and analysis of process and product measures. In this position paper, we extrapolate from our work so far to consider what new forms of automation and autonomy might be required for software engineering management of ULS systems.

Hongbing Kou. Automated Inference of Software Development Behaviors: Design, Implementation and Validation of Zorro for Test-Driven Development. Ph.D. thesis, University of Hawaii, Department of Information and Computer Sciences, December 2007. [ .pdf ]

A recent focus of interest in software engineering research is on low-level software processes, which define how software developers or development teams should carry out development activities in short phases that last from several minutes to a few hours. Anecdotal evidence exists for the positive impact on quality and productivity of certain low-level software processes such as test-driven development and continuous integration. However, empirical research on low-level software processes often yields conflicting results. A significant threat to the validity of the empirical studies on low-level software processes is that they lack the ability to rigorously assess process conformance. That is to say, the degree to which developers follow the low-level software processes cannot be evaluated. In order to improve the quality of empirical research on low-level software processes, I developed a technique called Software Development Stream Analysis (SDSA) that can infer development behaviors using automatically collected in-process software metrics. The collection of development activities is supported by Hackystat, a framework for automated software process and product metrics collection and analysis. SDSA abstracts the collected software metrics into a software development stream, a time-series data structure containing time-stamped development events. It then partitions the development stream into episodes, and uses a rule-based system to infer the low-level development behaviors exhibited in episodes. With the capabilities provided by Hackystat and SDSA, I developed the Zorro software system to study a specific low-level software process called Test-Driven Development (TDD). Experience reports have shown that TDD can greatly improve software quality with increased developer productivity, but empirical research findings on TDD are often mixed. An inability to rigorously assess process conformance is a possible explanation. Zorro can rigorously assess process conformance to a specific operational definition for TDD, and thus enable more controlled, comparable empirical studies. My research has demonstrated that Zorro can recognize the low-level software development behaviors that characterize TDD. Both the pilot and classroom case studies support this conclusion. The industrial case study shows that the automated data collection and development behavior inference have the potential to be useful for researchers.
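
A minimal illustration of the stream-to-episode partitioning idea, assuming a simple inactivity-gap boundary, is sketched below; SDSA's real partitioning rules are richer than this single threshold, and the event format is invented for the example.

    def partition_into_episodes(events, max_gap=300):
        """Split a time-ordered stream of (timestamp, event) pairs into episodes,
        starting a new episode whenever the gap between consecutive events
        exceeds max_gap seconds. (Illustrative only; not SDSA's actual tokenizer.)"""
        episodes, current = [], []
        last_time = None
        for timestamp, event in sorted(events):
            if last_time is not None and timestamp - last_time > max_gap:
                episodes.append(current)
                current = []
            current.append((timestamp, event))
            last_time = timestamp
        if current:
            episodes.append(current)
        return episodes

    # Made-up development stream: a burst of activity, a long pause, another burst.
    stream = [(0, "edit Foo.java"), (40, "run tests"), (90, "edit FooTest.java"),
              (2000, "edit Bar.java"), (2050, "commit")]
    print(len(partition_into_episodes(stream)))   # prints: 2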

2006

Lutz Prechelt, Sebastian Jekutsch, and Philip M. Johnson. Actual process: A research program. In Submitted to the 2006 Workshop on Software Process, May 2006. [ .pdf ]

Most process research relies heavily on the use of terms and concepts whose validity depends on a variety of assumptions to be met. As it is difficult to guarantee that they are met, such work continually runs the risk of being invalid. We propose a different and complementary approach to understanding process: Perform all description bottom-up and based on hard data alone. We call the approach actual process and the data actual events. Actual events can be measured automatically. This paper describes what has been done in this area already and what are the core problems to be solved in the future.

Hongbing Kou and Philip M. Johnson. Automated recognition of low-level process: A pilot validation study of Zorro for test-driven development. In Proceedings of the 2006 International Workshop on Software Process, Shanghai, China, May 2006. [ .pdf ]

Zorro is a system designed to automatically determine whether a developer is complying with the Test-Driven Development (TDD) process. Automated recognition of TDD could benefit the software engineering community in a variety of ways, from pedagogical aids to support the learning of test-driven design, to support for more rigorous empirical studies on the effectiveness of TDD in practice. This paper presents the Zorro system and the results of a pilot validation study, which shows that Zorro was able to recognize test-driven design episodes correctly 89% of the time. The results also indicate ways to improve Zorro's classification accuracy further, and provide evidence for the effectiveness of this approach to low-level software process recognition.

Qin Zhang. Improving Software Development Process and Product Management with Software Project Telemetry. Ph.D. thesis, University of Hawaii, Department of Information and Computer Sciences, December 2006. [ .pdf ]

Software development is slow, expensive and error prone, often resulting in products with a large number of defects which cause serious problems in usability, reliability, and performance. To combat this problem, software measurement provides a systematic and empirically-guided approach to control and improve software development processes and final products. However, due to the high cost associated with “metrics collection” and difficulties in “metrics decision-making,” measurement is not widely adopted by software organizations. This dissertation proposes a novel metrics-based program called “software project telemetry” to address the problems. It uses software sensors to collect metrics automatically and unobtrusively. It employs a domain-specific language to represent telemetry trends in software product and process metrics. Project management and process improvement decisions are made by detecting changes in telemetry trends and comparing trends between different periods of the same project. Software project telemetry avoids many problems inherent in traditional metrics models, such as the need to accumulate a historical project database and ensure that the historical data remain comparable to current and future projects. The claim of this dissertation is that software project telemetry provides an effective approach to (1) automated metrics collection and analysis, and (2) in-process, empirically-guided software development process problem detection and diagnosis. Two empirical studies were carried out to evaluate the claim: one in software engineering classes, and the other in the Collaborative Software Development Lab. The results suggested that software project telemetry had acceptably-low metrics collection and analysis overhead, and that it provided decision-making value at least in the exploratory context of the two studies.

Lorin Hochstein, Taiga Nakamura, Victor R. Basili, Sima Asgari, Marvin V. Zelkowitz, Jeffrey K. Hollingsworth, Forrest Shull, Jeffrey Carver, Martin Voelp, Nico Zazworka, and Philip M. Johnson. Experiments to understand HPC time to development. CTWatch Quarterly, November 2006. [ .pdf ]

In order to understand how high performance computing (HPC) programs are developed, a series of experiments, using students in graduate level HPC classes, have been conducted at many universities in the US. In this paper we discuss the general process of conducting those experiments, give some of the early results of those experiments, and describe a web-based process we are developing that will allow us to run additional experiments at other universities and laboratories that will be easier to conduct and generate results that more accurately reflect the process of building HPC programs.

Takuya Yamashita. Evaluation of Jupiter: A lightweight code review framework. M.S. Thesis CSDL-06-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2006. [ .pdf ]

Software engineers generally agree that code reviews reduce development costs and improve software quality by finding defects in the early stages of software development. In addition, code review software tools help the code review process by providing a more efficient means of collecting and analyzing code review data. On the other hand, software organizations that conduct code reviews often do not utilize these review tools. Instead, most organizations simply use paper or text editors to support their code review processes. Using paper or a text editor is potentially less useful than using a review tool for collecting and analyzing code review data. In this research, I attempted to address the problems of previous code review tools by creating a lightweight and flexible review tool. This review tool that I have developed, called "Jupiter", is an Eclipse IDE Plug-In. I believe the Jupiter Code Review Tool is more efficient at collecting and analyzing code review data than the text-based approaches. To investigate this hypothesis, I have constructed a methodology to compare the Jupiter Review Tool to the text-based review approaches. I carried out a case study using both approaches in a software engineering course with 19 students. The results provide some supporting evidence that Jupiter is more useful and more usable than the text-based code review, requires less overhead than the text-based review, and appears to support long-term adoption. The major contributions of this research are the Jupiter design philosophy, the Jupiter Code Review Tool, and the insights from the case study comparing the text-based review to the Jupiter-based review.

Hongbing Kou. Automated inference of software development behaviors: Design, implementation and validation of Zorro for test-driven development. Ph.D. Thesis Proposal CSDL-06-12, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 2006. [ .pdf ]

In my dissertation research, I propose to develop a systematic approach to automatically inferring software development behaviors using a technique I have developed called Software Development Stream Analysis (SDSA). Software Development Stream Analysis is a generic framework for inferring low-level software development behaviors. Zorro is an implementation of SDSA for Test-Driven Development (TDD). In addition, I designed a series of validation studies to test the SDSA framework by evaluating Zorro with respect to its capabilities to infer TDD development behaviors. An early pilot validation study found that Zorro works very well in practice, with Zorro recognizing the software development episodes of TDD with 88.4% accuracy. After this pilot study, I improved Zorro system's inferencing rules and evaluation mechanism as part of my collaborative research with Software Engineering Group at the National Research Council of Canada (NRC-CNRC). I am planning to conduct two more extended validation studies of Zorro in academic and industrial settings for Fall 2006 and Spring 2007.

Philip M. Johnson. Results from the 2006 classroom evaluation of Hackystat-UH. Technical Report CSDL-07-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2006. [ .html ]

This report presents the results from a classroom evaluation of Hackystat by ICS 413 and ICS 613 students at the end of Fall, 2006. The students had used Hackystat-UH for approximately six weeks at the time of the evaluation. The survey requests their feedback regarding the installation, configuration, overhead of use, usability, utility, and future use of the Hackystat-UH configuration. This classroom evaluation is a semi-replication of an evaluation performed on Hackystat by ICS 413 and 613 students at the end of Fall, 2003, which is reported in "Results from the 2003 Classroom Evaluation of Hackystat-UH". As the Hackystat system has changed significantly since 2003, some of the evaluation questions were changed. The data from this evaluation, in combination with the data from the 2003 evaluation, provide an interesting perspective on the past, present, and possible future of Hackystat. Hackystat has increased significantly in functionality since 2003, which has enabled the 2006 usage to more closely reflect industrial application, and which has resulted in significantly less overhead with respect to client-side installation. On the other hand, results appear to indicate that this increase in functionality has resulted in a decrease in the usability and utility of the system, due to inadequacies in the server-side user interface. Based upon the data, the report proposes a set of user interface enhancements to address the problems raised by the students, including Ajax-based menus and parameters, workflow based organization of the user interface, real-time display for ongoing project monitoring, annotations, and simplified data exploration facilities.

2005

Philip M. Johnson, Hongbing Kou, Michael G. Paulding, Qin Zhang, Aaron Kagawa, and Takuya Yamashita. Improving software development management through software project telemetry. IEEE Software, August 2005. [ .pdf ]

Software project telemetry is a new approach to software project management in which sensors are attached to development environment tools to unobtrusively monitor the process and products of development. This sensor data is abstracted into high-level perspectives on development trends called Telemetry Reports, which provide project members with insights useful for local, in-process decision making. This paper presents the essential characteristics of software project telemetry, contrasts it to other approaches such as predictive models based upon historical software project data, describes a reference framework implementation of software project telemetry called Hackystat, and presents our lessons learned so far.
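
As an illustrative aside (not drawn from the paper), the following minimal Java sketch shows the kind of reduction a telemetry stream performs: timestamped sensor events are grouped by day and summed, yielding one data point per day that can be plotted as a development trend. The SensorEvent type and the choice of a daily bucket are assumptions made purely for illustration.

    import java.time.LocalDate;
    import java.util.*;
    import java.util.stream.Collectors;

    // Hypothetical sensor event: a timestamp (epoch seconds) plus a metric value,
    // e.g. minutes of active editing reported by an editor sensor.
    record SensorEvent(long epochSeconds, double value) {}

    public class TelemetrySketch {
        // Reduce raw events to one summed data point per day: a simple telemetry stream.
        static SortedMap<LocalDate, Double> dailyStream(List<SensorEvent> events) {
            return events.stream().collect(Collectors.groupingBy(
                    e -> LocalDate.ofEpochDay(e.epochSeconds() / 86_400),
                    TreeMap::new,
                    Collectors.summingDouble(SensorEvent::value)));
        }

        public static void main(String[] args) {
            List<SensorEvent> events = List.of(
                    new SensorEvent(1_000_000L, 12.0),
                    new SensorEvent(1_050_000L, 30.0),
                    new SensorEvent(1_100_000L, 8.0));
            dailyStream(events).forEach((day, total) ->
                    System.out.println(day + " -> " + total + " active minutes"));
        }
    }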

Qin Zhang. Improving software development management with software project telemetry. Ph.D. Thesis Proposal CSDL-04-16, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 2005. [ .pdf ]

Software development is slow, expensive, and error prone, often resulting in products with a large number of defects which cause serious problems in usability, reliability, and performance. To combat this problem, software measurement provides a systematic and empirically-guided approach to control and improve development processes and final products. Experience has shown excellent results so long as measurement programs are conscientiously implemented and followed. However, due to the high cost associated with metrics collection and difficulties in metrics decision-making, many organizations fail to benefit from measurement programs. In this dissertation, I propose a new measurement approach - software project telemetry. It addresses the "metrics collection cost problem" through highly automated measurement machinery - sensors are used to collect metrics automatically and unobtrusively. It addresses the "metrics decision-making problem" through intuitive high-level visual perspectives on software development that support in-process, empirically-guided project management and process improvement. Unlike traditional metrics approaches, which are primarily based on historical project databases and focused on model-based project comparison, software project telemetry emphasizes project dynamics and in-process control. It combines both the precision of traditional project management techniques and the flexibility promoted by the agile community. The main claim of this dissertation is that software project telemetry provides an effective approach to (1) automated metrics collection, and (2) in-process, empirically-guided software development process problem detection and analysis. Three case studies will be conducted to evaluate the claim in different software development environments: (1) A pilot case study with student users in software engineering classes to (a) test drive the software project telemetry system in preparation for the next two full-scale case studies, and (b) gather the students' opinions when the adoption of the technology is mandated by their instructor. (2) A case study in CSDL to (a) use software project telemetry to investigate and improve its build process, and (b) evaluate the technology at the same time in CSDL (an environment typical of traditional software development with close collaboration and centralized decision-making). (3) A case study at Ikayzo with open-source project developers (geographically dispersed volunteer work and decentralized decision-making) to gather their opinions about software project telemetry. The time frame of this research is as follows. The implementation of the software project telemetry system is complete and deployed. I have finished the first pilot case study. I will start both the second and third case studies in October 2005, and they will last 4 - 6 months. I wish to defend my research in May or August 2006 if everything goes according to plan.

Philip M. Johnson and Michael G. Paulding. Understanding HPCS development through automated process and product measurement with Hackystat. In Second Workshop on Productivity and Performance in High-End Computing (P-PHEC), February 2005. [ .pdf ]

The high performance computing (HPC) community is increasingly aware that traditional low-level, execution-time measures for assessing high-end computers, such as flops/second, are not adequate for understanding the actual productivity of such systems. In response, researchers and practitioners are exploring new measures and assessment procedures that take a more holistic approach to high performance productivity. In this paper, we present an approach to understanding and assessing development-time aspects of HPC productivity. It involves the use of Hackystat for automatic, non-intrusive collection and analysis of six measures: Active Time, Most Active File, Command Line Invocations, Parallel and Serial Lines of Code, Milestone Test Success, and Performance. We illustrate the use and interpretation of these measures through a case study of small-scale HPC software development. Our results show that these measures provide useful insight into development-time productivity issues, and suggest promising additions to and enhancements of the existing measures.
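
As a rough, hypothetical illustration (the paper does not give this algorithm, and the five-minute interval is only an assumption), a measure in the spirit of Active Time can be approximated by counting the distinct five-minute intervals that contain at least one developer event:

    import java.util.List;

    public class ActiveTimeSketch {
        // Approximate active time as the number of distinct 5-minute buckets that
        // contain at least one event, times 5 minutes. Event times are epoch seconds.
        static long activeMinutes(List<Long> eventEpochSeconds) {
            long buckets = eventEpochSeconds.stream()
                    .map(t -> t / 300)   // 300 seconds = one 5-minute bucket
                    .distinct()
                    .count();
            return buckets * 5;
        }

        public static void main(String[] args) {
            // Three events; the first two fall into the same bucket, so 10 minutes total.
            System.out.println(activeMinutes(List.of(100L, 200L, 900L)) + " active minutes");
        }
    }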

Aaron Kagawa. Priority ranked inspection: Supporting effective inspection in resource-limited organizations. M.S. Thesis CSDL-05-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, August 2005. [ .pdf ]

Imagine that your project manager has budgeted 200 person-hours for the next month to inspect newly created source code. Unfortunately, in order to inspect all of the documents adequately, you estimate that it will take 400 person-hours. However, your manager refuses to increase the budgeted resources for the inspections. How do you decide which documents to inspect and which documents to skip? Unfortunately, the classic definition of inspection does not provide any advice on how to handle this situation. For example, the notion of entry criteria used in Software Inspection determines when documents are ready for inspection, not whether inspection is needed at all. My research has investigated how to prioritize inspection resources and apply them to the areas of the system that need them most. It is commonly assumed that defects are not uniformly distributed across all documents in a system; a relatively small subset of a system accounts for a relatively large proportion of defects. If inspection resources are limited, it will be more effective to identify and inspect the defect-prone areas. To accomplish this research, I created an inspection process called Priority Ranked Inspection (PRI). PRI uses software product and development process measures to distinguish documents that are “more in need of inspection” (MINI) from those “less in need of inspection” (LINI). Some of these product and process measures are user-reported defects, unit test coverage, active time, and number of changes. I hypothesize that inspection of MINI documents will generate more defects, with higher severity, than inspection of LINI documents. My research employed a simple exploratory study, which included inspecting MINI and LINI software code and checking whether MINI code inspections generate more defects than LINI code inspections. The results of the study provide supporting evidence that MINI documents do contain more high-severity defects than LINI documents. In addition, there is some evidence that PRI can provide developers with more information to help determine which documents they should select for inspection.
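
To make the ranking idea concrete, here is a minimal Java sketch of how documents might be ordered by such measures. The DocMeasures type, the score function, and its weights are invented for illustration and are not the scheme defined in the thesis.

    import java.util.*;

    // Hypothetical per-document measures of the kind PRI draws on.
    record DocMeasures(String name, int reportedDefects, double testCoverage,
                       double activeHours, int changes) {}

    public class PriSketch {
        // Illustrative "need of inspection" score: defects, churn, and effort raise it;
        // higher unit test coverage (0-100) lowers it. The weights are made up.
        static double score(DocMeasures d) {
            return 3.0 * d.reportedDefects()
                 + 1.0 * d.changes()
                 + 0.5 * d.activeHours()
                 - 2.0 * d.testCoverage();
        }

        // Rank documents so the most "in need of inspection" (MINI) come first.
        static List<DocMeasures> rank(List<DocMeasures> docs) {
            return docs.stream()
                    .sorted(Comparator.comparingDouble(PriSketch::score).reversed())
                    .toList();
        }
    }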

Philip M. Johnson, Brian T. Pentland, Victor R. Basili, and Martha S. Feldman. Cedar - cyberinfrastructure for empirical data analysis and reuse. Technical Report CSDL-05-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2005. [ .pdf ]

This document presents the project description for a proposal to the National Science Foundation program on Next Generation Cybertools. It discusses an approach to integrating qualitative and quantitative empirical data, approaches to privacy policies, and data management issues to support collection, analysis, and dissemination of this data.

Hongbing Kou. Studying micro-processes in software development stream. Technical Report CSDL-05-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2005. [ .pdf ]

In this paper we propose a new streaming technique for studying software development. As we have observed, software development consists of a series of activities such as editing, compilation, testing, debugging, and deployment. All of these activities contribute to a development stream, a time-ordered collection of software development activities. A development stream lets us replay and examine the software development process at a later time without much hassle. We developed a system called Zorro to generate and analyze development streams at the Collaborative Software Development Laboratory at the University of Hawaii. It is built on top of Hackystat, an in-process automatic metric collection system developed in the CSDL. Hackystat sensors continuously collect development activities and send them to a centralized data store for processing. Zorro reads in all of a project's data and constructs a stream from it. Tokenizers are chained together to divide the development stream into episodes (micro-iterations) for classification with a rule engine. In this paper we demonstrate the analysis of Test-Driven Development (TDD) with this framework.
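
The following Java sketch suggests the overall shape of such an analysis: a time-ordered event stream is split into episodes at boundary events, and each episode is then classified by a rule. The DevEvent type, the single boundary rule, and the classification rule shown here are simplifications invented for illustration, not the actual Zorro tokenizers or rule base.

    import java.util.*;

    // A development activity: a timestamp, a kind (EDIT, COMPILE, TEST, ...), and a file.
    record DevEvent(long time, String kind, String file) {}

    public class StreamSketch {
        // Split a time-ordered stream into episodes, closing an episode at each TEST event
        // (one simple, hypothetical boundary rule).
        static List<List<DevEvent>> episodes(List<DevEvent> stream) {
            List<List<DevEvent>> result = new ArrayList<>();
            List<DevEvent> current = new ArrayList<>();
            for (DevEvent e : stream) {
                current.add(e);
                if (e.kind().equals("TEST")) {
                    result.add(current);
                    current = new ArrayList<>();
                }
            }
            if (!current.isEmpty()) result.add(current);
            return result;
        }

        // A toy classification rule: an episode whose first edit touches a test class
        // looks "test-first"; everything else is left unclassified.
        static String classify(List<DevEvent> episode) {
            for (DevEvent e : episode) {
                if (e.kind().equals("EDIT")) {
                    return e.file().startsWith("Test") ? "test-first" : "unclassified";
                }
            }
            return "unclassified";
        }
    }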

Philip M. Johnson. A continuous, evidence-based approach to discovery and assessment of software engineering best practices. Technical Report CSDL-05-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 2005. [ .pdf ]

This document presents the project description for a proposal to the National Science Foundation. It discusses an approach that integrates Hackystat, Software Project Telemetry, Software Development Stream Analysis, Pattern Discovery, and Evidence-based software engineering to support evaluation of best practices. Both classroom and industrial case studies are proposed to support evaluation of the techniques.

Philip M. Johnson. Readings in empirical evaluation for budding software engineering researchers. Technical Report CSDL-05-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2005. [ .html ]

Provides links to resources for empirical software engineering evaluation.

Philip M. Johnson. Telemetry plate lunch contest results. Technical Report CSDL-05-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2005. [ .html ]

The "Telemetry Plate Lunch Contest" was a contest to support investigation of the use of multi-axis telemetry charts in Hackystat. This document describes the winning submissions.

Christoph Lofi. Continuous GQM: An automated framework for the goal-question-metric paradigm. M.S. Thesis CSDL-05-09, Department of Software Engineering, Fachbereich Informatik, Universitat Kaiserslautern, Germany, August 2005. [ .pdf ]

Measurement is an important aspect of software engineering, as it is the foundation of predictable and controllable software project execution. Measurement is essential for assessing actual project progress, establishing baselines, and validating the effects of improvement or controlling actions. The work performed in this thesis is based on Hackystat, a fully automated measurement framework for software engineering processes and products. Hackystat is designed to unobtrusively measure a wide range of metrics relevant to software development and collect them in a centralized data repository. Unfortunately, it is not easy to interpret, analyze, and visualize the vast data collected by Hackystat in such a way that it can effectively be used for software project control. A potential solution to this problem is to integrate Hackystat with the GQM (Goal/Question/Metric) paradigm, a popular approach for goal-oriented, systematic definition of measurement programs for software engineering processes and products. This integration should allow goal-oriented use of the metric data collected by Hackystat and increase its usefulness for project control. In the course of this work, this extension to Hackystat, later called hackyCGQM, is implemented. As a result, hackyCGQM enables Hackystat to be used as a Software Project Control Center (SPCC) by providing purposeful high-level representations of the measurement data. Another interesting side effect of the combination of Hackystat and hackyCGQM is that the system is able to perform fully automated measurement and analysis cycles. This leads to the development of cGQM, a specialized method for fully automated, GQM-based measurement programs. In summary, hackyCGQM seeks to implement a completely automated GQM-based measurement framework. This high degree of automation is made possible by limiting the implemented measurement programs to metrics which can be measured automatically, thus sacrificing the ability to use arbitrary metrics.
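
For readers unfamiliar with GQM, the following small Java sketch illustrates the goal/question/metric structure and the restriction cGQM relies on, namely that every metric can be evaluated automatically. The types and the example goal are hypothetical and are not taken from the thesis.

    import java.util.List;
    import java.util.function.Supplier;

    // Minimal GQM structure: each metric is something that can be measured automatically,
    // modeled here as a Supplier<Double>.
    record Metric(String name, Supplier<Double> measure) {}
    record Question(String text, List<Metric> metrics) {}
    record Goal(String statement, List<Question> questions) {}

    public class GqmSketch {
        public static void main(String[] args) {
            Goal goal = new Goal("Improve unit test discipline",
                    List.of(new Question("How much of the code is exercised by tests?",
                            List.of(new Metric("method coverage", () -> 72.5)))));
            // A fully automated GQM cycle would periodically re-evaluate every metric.
            goal.questions().forEach(q -> q.metrics().forEach(m ->
                    System.out.println(q.text() + " -> " + m.name() + " = " + m.measure().get())));
        }
    }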

2004

Philip M. Johnson and Joy M. Agustin. Keeping the coverage green: Investigating the cost and quality of testing in agile development. Submitted to the 2004 Conference on Software Metrics, Chicago, Illinois, August 2004. [ .pdf ]

An essential component of agile methods such as Extreme Programming is a suite of test cases that is incrementally built and maintained throughout development. This paper presents research exploring two questions regarding testing in these agile contexts. First, is there a way to validate the quality of test case suites in a manner compatible with agile development methods? Second, is there a way to assess and monitor the costs of agile test case development and maintenance? In this paper, we present the results of our recent research on these issues. Our results include a measure called XC (for Extreme Coverage), which is implemented in a system called JBlanket. XC is designed to support validation of the test-driven design methodology used in agile development. We describe how XC and JBlanket differ from other coverage measures and tools, assess their feasibility through a case study in a classroom setting, assess their external validity on a set of open source systems, and illustrate how to incorporate XC into a more global measure of testing cost and quality called Unit Test Dynamics (UTD). We conclude with suggested research directions building upon these findings to improve agile methods and tools.
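
As an illustrative sketch only, method-level Extreme Coverage can be thought of as the fraction of non-trivial methods that are invoked by at least one test. The MethodInfo type is hypothetical, and the one-line-method exclusion below mirrors the example rule described in the related JBlanket abstracts rather than the full XC rule set.

    import java.util.List;

    // Hypothetical per-method record: name, size, and whether any test invoked it.
    record MethodInfo(String name, int linesOfCode, boolean invokedByTests) {}

    public class XcSketch {
        // Extreme Coverage, roughly: the percentage of non-trivial methods (here, methods
        // with more than one line of code) that are invoked by the test suite.
        static double extremeCoverage(List<MethodInfo> methods) {
            List<MethodInfo> relevant = methods.stream()
                    .filter(m -> m.linesOfCode() > 1)
                    .toList();
            if (relevant.isEmpty()) return 100.0;
            long covered = relevant.stream().filter(MethodInfo::invokedByTests).count();
            return 100.0 * covered / relevant.size();
        }
    }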

Philip M. Johnson, Hongbing Kou, Joy M. Agustin, Qin Zhang, Aaron Kagawa, and Takuya Yamashita. Practical automated process and product metric collection and analysis in a classroom setting: Lessons learned from Hackystat-UH. In Proceedings of the 2004 International Symposium on Empirical Software Engineering, Los Angeles, California, August 2004. [ .pdf ]

Measurement definition, collection, and analysis is an essential component of high quality software engineering practice, and is thus an essential component of the software engineering curriculum. However, providing students with practical experience with measurement in a classroom setting can be so time-consuming and intrusive that it is counter-productive, teaching students that software measurement is “impractical” for many software development contexts. In this research, we designed and evaluated a very low-overhead approach to measurement collection and analysis using the Hackystat system with special features for classroom use. We deployed this system in two software engineering classes at the University of Hawaii during Fall, 2003, and collected quantitative and qualitative data to evaluate the effectiveness of the approach. Results indicate that the approach represents substantial progress toward practical, automated metrics collection and analysis, though issues relating to the complexity of installation and privacy of user data remain.

Aaron Kagawa and Philip M. Johnson. The Hackystat-JPL configuration: Round 2 results. Technical Report CSDL-03-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2004. [ .html ]

This report presents selected round two results from Hackystat-based descriptive analyses of Harvest workflow data gathered from the Mission Data System software development project from January, 2003 to December, 2003. The information provided in this report describes improvements and differences made since the writing of the previous technical report (The Hackystat-JPL Configuration: Overview and Initial Results).

Stuart Faulk, John Gustafson, Philip M. Johnson, Adam A. Porter, Walter Tichy, and Larry Votta. Toward accurate HPC productivity measurement. In Proceedings of the First International Workshop on Software Engineering for High Performance Computing System Applications, Edinburgh, Scotland, May 2004. [ .pdf ]

One key to improving high-performance computing (HPC) productivity is finding better ways to measure it. We define productivity in terms of mission goals, i.e., greater productivity means that more science is accomplished with less cost and effort. Traditional software productivity metrics and computing benchmarks have proven inadequate for assessing or predicting such end-to-end productivity. In this paper we describe a new approach to measuring productivity in HPC applications that addresses both development time and execution time. Our goal is to develop a public repository of effective productivity benchmarks that anyone in the HPC community can apply to assess or predict productivity.

Stuart Faulk, Philip M. Johnson, John Gustafson, Adam A. Porter, Walter Tichy, and Larry Votta. Measuring HPC productivity. International Journal of High Performance Computing Applications, December 2004. [ .pdf ]

One key to improving high-performance computing (HPC) productivity is finding better ways to measure it. We define productivity in terms of mission goals, i.e., greater productivity means that more science is accomplished with less cost and effort. Traditional software productivity metrics and computing benchmarks have proven inadequate for assessing or predicting such end-to-end productivity. In this paper we introduce a new approach to measuring productivity in HPC applications that addresses both development time and execution time. Our goal is to develop a public repository of effective productivity benchmarks that anyone in the HPC community can apply to assess or predict productivity.

Philip M. Johnson. Proceedings of the first Hackystat developer boot camp. Technical report, University of Hawaii, May 2004. [ .pdf ]

Aaron Kagawa. Hackystat MDS supporting MSL MMR. Technical Report CSDL-04-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 2004. [ .html ]

This report presents selected results from Hackystat Analyses on Mission Data System's Release 9. The goal is to identify reports of use to the Monthly Management Report for Mars Science Laboratory.

Aaron Kagawa. Hackystat MDS supporting MSL MMR: Round 2 results. Technical Report CSDL-04-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2004. [ .html ]

This report presents selected additional results from Hackystat Analyses on Mission Data System's Release 9. The goal is to identify reports of use to the Monthly Management Report for Mars Science Laboratory.

Aaron Kagawa. Hackystat-SQI: Modeling different development processes. Technical Report CSDL-04-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2004. [ .html ]

This report presents the design of a Hackystat module called SQI, whose purpose is to support quality analysis for multiple projects at Jet Propulsion Laboratory.

Aaron Kagawa. Hackystat-SQI: First progress report. Technical Report CSDL-04-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2004. [ .html ]

This report presents the initial analyses available for Hackystat-SQI and future directions.

Michael G. Paulding. Measuring the processes and products of HPCS development: Initial results for the optimal truss purpose-based benchmark. Technical Report CSDL-04-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 2004. [ .html ]

This report presents initial results from the in-progress implementation of the Optimal Truss Purpose-based benchmark. It shows process and product data collected both automatically by Hackystat and manually through engineering logs and other tools. It presents some interpretations of the data and proposes approaches for better understanding and improving HPCS development productivity.

2003

Philip M. Johnson, Mette L. Moffett, and Brian T. Pentland. Lessons learned from VCommerce: A virtual environment for interdisciplinary learning about software entrepreneurship. Communications of the ACM, 46(12), December 2003. [ .pdf ]

The Virtual Commerce (VCommerce) simulation environment provides a framework within which students can develop internet-based businesses. Unlike most entrepreneurship simulation games, the objective of VCommerce is not to maximize profits. The environment, which is designed for use in interdisciplinary classroom settings, provides an opportunity for students with different backgrounds to build (virtual) businesses together. Elements of VCommerce, such as its open-ended business model and product; significant technical depth; external players; and severe time constraints combine to create a surprisingly realistic and effective learning experience for students in both computer science and management. This article overviews the VCommerce environment and our lessons learned from using it at both the University of Hawaii and Michigan State University.

Philip M. Johnson, Hongbing Kou, Joy M. Agustin, Christopher Chan, Carleton A. Moore, Jitender Miglani, Shenyan Zhen, and William E. Doane. Beyond the personal software process: Metrics collection and analysis for the differently disciplined. In Proceedings of the 2003 International Conference on Software Engineering, Portland, Oregon, May 2003. [ .pdf ]

Pedagogies such as the Personal Software Process (PSP) shift metrics definition, collection, and analysis from the organizational level to the individual level. While case study research indicates that the PSP can provide software engineering students with empirical support for improving estimation and quality assurance, there is little evidence that many students continue to use the PSP when no longer required to do so. Our research suggests that this “PSP adoption problem” may be due to two problems: the high overhead of PSP-style metrics collection and analysis, and the requirement that PSP users “context switch” between product development and process recording. This paper overviews our initial PSP experiences, our first attempt to solve the PSP adoption problem with the LEAP system, and our current approach called Hackystat. This approach fully automates both data collection and analysis, which eliminates overhead and context switching. However, Hackystat changes the kind of metrics data that is collected, and introduces new privacy-related adoption issues of its own.

Joy M. Agustin. Improving software quality through extreme coverage with JBlanket. M.S. Thesis CSDL-02-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2003. [ .pdf ]

Unit testing is an important part of software testing that aids in the discovery of bugs earlier in the software development process. Extreme Programming (XP), and its Test First Design technique, relies so heavily upon unit tests that the first code implemented is made up entirely of test cases. Furthermore, XP considers a feature to be completely coded only when all of its test cases pass. However, passing all test cases does not necessarily mean the test cases are good. Extreme Coverage (XC) is a new approach that helps to assess and improve the quality of software by enhancing unit testing. It extends the XP requirement that all test cases must pass with the requirement that all defect-prone testable methods must be invoked by the tests. Furthermore, a set of flexible rules is applied to XC to make it as attractive and lightweight as unit testing is in XP. One example rule is to exclude all methods containing one line of code from analysis. I designed and implemented a new tool, called JBlanket, that automates the XC measurement process similar to the way that JUnit automates unit testing. JBlanket produces HTML reports, similar to JUnit reports, which inform the user about which methods need to be tested next. In this research, I explore the feasibility of JBlanket, the amount of effort needed to reach and maintain XC, and the impact that knowledge of XC has on system implementation, through deployment and evaluation in an academic environment. Results show that most students find JBlanket to be a useful tool in developing their test cases, and that knowledge of XC did influence the manner in which students implemented their systems. However, more studies are needed to conclude precisely how much effort is needed to reach and maintain XC. This research lays the foundation for future research directions. One direction involves increasing its flexibility and value by expanding and refining the rules of XC. Another direction involves tracking XC behavior to find out when it is and is not applicable.

Aaron Kagawa. The design, implementation, and evaluation of CLEW: An improved collegiate department website. B.S. Thesis CSDL-03-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2003. [ .pdf ]

The purpose of a collegiate department website is to provide prospective students, current students, faculty, staff, and other academic and industry professionals with information concerning the department. The information presented on the website should give the user an accurate model of the department, even as it changes over time. Some of these changes include adding new faculty members, new students, new courses, etc. The more accurately the website models the department, the more aware the website's users will be of the department. Traditional collegiate department websites have two primary problems in creating an accurate model of their department. First, only a few people, usually the department webmasters, can add information to the website. Second, it is difficult to enable website users to be informed of changes to the website that might be of interest to them. These two problems decrease the accuracy of the model and hamper its effectiveness in alerting users to changes to the website. As a result, user awareness of the department is also decreased. The Collaborative Educational Website (CLEW) is a Java web application intended to support accurate modeling of a collegiate department. CLEW is designed to solve the traditional collegiate department website's two main problems. First, it provides interactive services which allow users to add various kinds of information to the website. Second, CLEW addresses the notification problem by providing tailored email notifications of changes to the website. CLEW was developed by a software engineering class in the Information and Computer Science Department at the University of Hawaii at Manoa. My role in this development, as project leader, was to design and implement the framework for the system. CLEW currently contains approximately 28,000 lines of Java code and upwards of 500 web pages. In the Spring 2003 semester, CLEW replaced the existing Information and Computer Science Department website. I evaluated CLEW's effectiveness as a model of the department using pre- and post-release questionnaires. I also evaluated usage data of the CLEW system to assess the functionality provided by CLEW. If CLEW provides a more accurate model of a collegiate department, then the next step is to provide the CLEW framework to other collegiate departments worldwide. It is my hope that the users of CLEW will get a clue about their department!

Philip M. Johnson. Hackystat metric collection and analysis for the MDS harvest CM system: A design specification. Technical Report CSDL-03-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, August 2003. [ .html ]

This proposal describes the requirements and top-level design for a Hackystat-based system that automatically monitors and analyzes the MDS development process using data collected from the Harvest CM system.

Philip M. Johnson. The Hackystat-JPL configuration: Overview and initial results. Technical Report CSDL-03-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 2003. [ .html ]

This report presents selected initial results from Hackystat-based descriptive analyses of Harvest workflow data gathered from the Mission Data System software development project from January, 2003 to August, 2003. We present the motivation for this work, the methods used, examples of the analyses, and questions raised by the results. Our major findings include: (a) workflow transitions not documented in the "official" process; (b) significant numbers of packages with unexpected transition sequences; (c) cyclical levels of development "intensity" as represented by levels of promotion/demotion; (d) a possible approach to calculating the proportion of "new" scheduled work versus rework/unscheduled work along with baseline values; and (e) a possible approach to calculating the distribution of package "ages" and days spent in the various workflow states, along with potential issues with the representation of "package age" based upon the current approach to package promotion. The report illustrates how our current approach to analysis can yield descriptive perspectives on the MDS development process. It provides a first step toward more prescriptive, analytic models of the MDS software development process by providing insights into the potential uses and limitations of MDS product and process data.

Philip M. Johnson. The review game: Teaching asynchronous distributed software review using Eclipse. Technical Report CSDL-03-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 2003. [ .html ]

Presents an approach to teaching software review involving an Eclipse plug-in called Jupiter and automated metrics collection and analysis using Hackystat.

Takuya Yamashita. Jupiter users guide. Technical Report CSDL-03-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2003. [ .html ]

Provides a users guide for the Jupiter code review plug-in for Eclipse.

Philip M. Johnson. Results from the 2003 classroom evaluation of Hackystat-UH. Technical Report CSDL-03-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2003. [ .html ]

This report presents the results from a qualitative evaluation of ICS 413 and ICS 613 students at the end of Fall, 2003. The students had used Hackystat-UH for approximately six weeks at the time of the evaluation. The survey requests their feedback regarding the installation, configuration, overhead of use, usability, utility, and future use of the Hackystat-UH configuration. Results provide evidence that: (1) Significant problems occur during installation and configuration of the system; (2) the Hackystat-UH configuration incurs very low overhead after completing installation and configuration; (3) Analyses were generally found to be somewhat useful and usable; and (4) feasibility in a professional development context requires addressing privacy and platform issues.

2002

Jitender Miglani. The design, implementation, and evaluation of INCA: an automated system for approval code allocation. M.S. Thesis CSDL-01-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2002. [ .pdf ]

The ICS department of the University of Hawaii has faced problems surrounding approval code distribution as its enrollment has increased. The manual system for approval code allocation was time-consuming, ineffective, and inefficient. INCA is designed to automate the task of approval code allocation, improve the quality of course approval decisions, and decrease the administrative overhead involved in those decisions. Based upon informal feedback from department administrators, it appears that INCA reduces their overhead and makes their life easier. What are the old problems that are solved by INCA? Does INCA introduce new kinds of problems for the administrator? What about the students? Are they completely satisfied with the system? In what ways does the system benefit the department as a whole? This thesis discusses the design, implementation, and evaluation of INCA. It evaluates INCA from the viewpoints of the administrator, the students, and the department. An analysis of emails received at the uhmics@hawaii.edu account indicates that INCA reduces administrative overhead. The results of the user survey show that three quarters of students believe INCA improved their course approval predictability and course requirements understandability, and that they prefer INCA to the old method of requesting approval codes by email. Analysis of the INCA database provided course demand information and student statistics useful to the department. This evaluation of INCA from three different perspectives provides useful insights for future improvement of INCA and for improving the student experience with academic systems in general.

Bill Giebink. Bringing the Faulkes Telescope to classrooms in Hawaii. M.S. Thesis Proposal CSDL-02-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, March 2002. [ http ]

The Faulkes Telescope (FT), currently under construction on the summit of Haleakala, Maui, Hawaii, will provide data from celestial observations to schools in the United Kingdom and Hawaii. This project, with its unique goal of building a telescope to be used exclusively for educational purposes, is a joint venture between groups in the United Kingdom and Hawaii. Teachers and students will be able to download data that has been collected by the telescope on a previous occasion or sign up to have the telescope collect data at a specific time for them. Current plans call for data from the telescope to be delivered to classrooms in the form of raw data files and images from processed raw data files. In addition to sharing use of the telescope, part of the agreement between the UK and Hawaii groups provides for the UK group to share all software developed for the project with the Hawaii group. However, though a system for transporting images to schools is being developed for the UK side, at present there is no corresponding system for Hawaii. Also, at this point neither the British nor Hawaii sides have a definite system for storing and transporting raw data files. A first step, therefore, toward making the FT useful for students and teachers in Hawaii is to develop a plan for a complete system to archive and transport telescope data. It is anticipated that a plan for this system will include: 1) a specification of the required hardware components, 2) a description of how data will move in and out of the system, 3) a definition of the data pathway within the system, and 4) a description of the data storage requirements (i.e. database). The development of each of the components of the system will consist of a discussion of available options followed by a suggestion of the best choice of action. Development of this system is anticipated to be the topic for a directed reading/research project to be undertaken during spring, 2002. After the system has been clearly defined there are some additional questions to be answered. Among the more interesting aspects is the question of how to present data from the telescope in the most useful and effective manner to teachers and students.

Weifeng Miao. J2EEval: A method for performance analysis of Enterprise JavaBean applications. M.S. Thesis CSDL-02-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, August 2002.

J2EEval is a method for performance analysis of Enterprise JavaBean (EJB) applications. This thesis overviews the method and its application in the context of a case study of the Inca Course approval system.

Philip M. Johnson. Improving the dependability and predictability of JPL/MDS software through low-overhead validation of software process and product metrics. Technical Report CSDL-02-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2002. [ http ]

This white paper presents information regarding a proposed collaboration between the Collaborative Software Development Laboratory, the Mission Data Systems group at Jet Propulsion Laboratory, and the Center for Software Engineering at University of Southern California. The proposed collaboration would be funded through grants from the NSF/NASA Highly Dependable Computing and Communication Systems Research (HDCCSR) program.

Joy M. Agustin, William M. Albritton, and Nolan Y. Kiddo. Virtual mall management software. Technical Report CSDL-02-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2002. [ .pdf ]

Presents a business plan for commercialization of the Vendor Relationship Management (VRM) system.

Philip M. Johnson. Supporting development of highly dependable software through continuous, automated, in-process, and individualized software measurement validation. Technical Report CSDL-02-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2002. [ .pdf ]

Highly dependable software is, by nature, predictable. For example, one can predict with confidence the circumstances under which the software will work and the circumstances under which it will fail. Empirically-based approaches to creating predictable software are based on two assumptions: (1) historical data can be used to develop and calibrate models that generate empirical predictions, and (2) there exist relationships between internal attributes of the software (i.e. immediately measurable process and product attributes such as size, effort, defects, complexity, and so forth) and external attributes of the software (i.e. abstract and/or non-immediately measurable attributes, such as `quality', the time and circumstances of a specific component's failure in the field, and so forth). Software measurement validation is the process of determining a predictive relationship between available internal attributes and correspondingly useful external attributes and the conditions under which this relationship holds. This report proposes research whose general objective is to design, implement, and validate software measures within a development infrastructure that supports the development of highly dependable software systems. The measures and infrastructure are designed to support dependable software development in two ways: (1) They will support identification of modules at risk of being fault-prone, enabling more efficient and effective allocation of quality assurance resources, and (2) They will support incremental software development through continuous monitoring, notifications, and analyses. Empirical assessment of these methods and measures during use on the Mission Data System project at Jet Propulsion Laboratory will advance the theory and practice of dependable computing and software measurement validation and provide new insight into the technological and methodological problems associated with the current state of the art.

Joy M. Agustin. JBlanket: Support for extreme coverage in Java unit testing. Technical Report CSDL-02-08, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2002. [ .pdf ]

Unit testing is a tool commonly used to ensure reliability in software development, and it can be applied as soon as the core functionality of a program is implemented. In conventional unit testing, programmers need access to specifications and source code in order to properly design unit tests. However, this is not possible in Extreme Programming (XP), where tests are created before any feature of a system is ever implemented. XP's approach does not lead to improper testing, but rather to a different approach to testing. JBlanket is a tool developed in the Collaborative Software Development Laboratory (CSDL) at the University of Hawai'i (UH) to assist this kind of "unconventional" testing. It calculates method-level coverage in Java programs, a granularity of test case coverage coarse enough that programmers can not only ensure that all of their unit tests pass, but also verify that all of their currently implemented methods are exercised. Unit testing in which 100 percent of all unit tests must pass and which also exercises 100 percent of all non-trivial implemented methods is called Extreme Coverage. This research will attempt to show that Extreme Coverage is useful in developing quality software.

Hongbing Kou and Xiangli Xu. Most active file measurement in Hackystat. Technical Report CSDL-02-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2002. [ .pdf ]

Hackystat, an automated metric collection and analysis tool, adopts the "Most Active File" measurement, computed in five-minute time chunks, to represent developer effort. This measurement is validated internally in this report. The results show that larger time chunk sizes have a strong linear relationship with the standard time chunk size (1 minute), that the percentage of missed effort relative to total effort is very low with a five-minute chunk size, and that the relative ranking of active files by effort changes only slightly.
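
A minimal Java sketch of the chunking idea, assuming only that events carry a timestamp and a file name (the FileEvent type is hypothetical): for each five-minute chunk, the full five minutes of effort is credited to the file with the most events in that chunk.

    import java.util.*;
    import java.util.stream.Collectors;

    // A file-activity event: epoch seconds plus the file being worked on.
    record FileEvent(long time, String file) {}

    public class MostActiveFileSketch {
        // For each 5-minute chunk, credit the full 5 minutes to the file with the most
        // events in that chunk (ties broken arbitrarily). Returns minutes per file.
        static Map<String, Long> effortPerFile(List<FileEvent> events) {
            Map<Long, List<FileEvent>> byChunk = events.stream()
                    .collect(Collectors.groupingBy(e -> e.time() / 300));
            Map<String, Long> minutes = new HashMap<>();
            for (List<FileEvent> chunk : byChunk.values()) {
                String mostActive = chunk.stream()
                        .collect(Collectors.groupingBy(FileEvent::file, Collectors.counting()))
                        .entrySet().stream()
                        .max(Map.Entry.comparingByValue())
                        .orElseThrow()
                        .getKey();
                minutes.merge(mostActive, 5L, Long::sum);
            }
            return minutes;
        }
    }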

Christoph Aschwanden and Aaron Kagawa. Comparing personal project metrics to support process and product improvement. Technical Report CSDL-02-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2002. [ .pdf ]

Writing high quality software with a minimum of effort is an important thing to learn. Various personal metric collection processes exist, such as PSP and Hackystat. However, using a personal metric collection process gives only a partial indication of how a programmer stands among his or her peers. Personal metrics vary greatly among programmers, and it is not always clear what the "correct" way to develop software is. This paper compares the personal programming characteristics of students in a class environment. Metrics, such as CK Metrics, have been analyzed and compared against a set of similar students in an attempt to find the correct or accepted values for these metrics. It is our belief that programmers can gain as much, if not more, information from comparing their personal metrics against those of other programmers. Preliminary results show that people with more experience in programming produce different metrics than those with less.

Cliff Tomosada and Burt Leung. Configuration management and Hackystat: Initial steps to relating organizational and individual development. Technical Report CSDL-02-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2002. [ .pdf ]

Hackystat is a software development metrics collection tool that focuses on individual developers. Hackystat is able to provide a developer with a personal analysis of his or her unique processes. Source code configuration management (SCM) systems, on the other hand, are a means of storage for source code in a development community and serve as a controller for what each individual may contribute to the community. We created a Hackystat sensor for CVS (an SCM system) in the hope of bridging the gap between these two very different, yet related, software applications. It was our hope to use the data we collected to address the issue of development conflicts that often arise in organizational development environments. We found, however, that neither application, Hackystat nor CVS, could be easily reconfigured to meet our needs.

2001

Jitender Miglani. The design, implementation, and evaluation of INCA: a proposal for an automated system for approval code allocation. M.S. Thesis Proposal CSDL-01-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 2001. [ .pdf ]

The ICS department of the University of Hawaii has faced problems surrounding approval code distribution as its enrollment has increased. The manual system for approval code allocation was time-consuming, ineffective, and inefficient. INCA is designed to automate the task of approval code allocation, improve the quality of course approval decisions, and decrease the administrative overhead involved in those decisions. Based upon informal feedback from department administrators, it appears that INCA reduces their overhead and makes their life easier. What are the old problems that are solved by INCA? Does INCA introduce new kinds of problems for the administrator? What about the students? Are they completely satisfied with the system? In what ways does the system benefit the department as a whole? In this thesis, I will discuss the design, implementation, and evaluation of INCA. I will evaluate INCA from the viewpoints of students, administrators, and the department. I will analyze email to show that INCA reduces administrative overhead. I will conduct a user survey to investigate whether INCA improves course approval predictability and requirements understandability for students. Finally, I will analyze the INCA database to extract information useful to the department for course curriculum planning. The evaluation of INCA will provide useful insights for future improvements of INCA and for improving the student experience with academic systems in general.

Mette L. Moffett, Brian T. Pentland, and Philip M. Johnson. Vcommerce administrator guide. Technical Report CSDL-00-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 2001. [ .pdf ]

Provides administrative support for installation, configuration, and running the VCommerce simulation.

Mark F. Waterson. The hardware subroutine approach to developing custom co-processors. M.S. Thesis CSDL-01-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2001. [ .pdf ]

The Hardware Subroutine Approach to developing a reconfigurable, custom co-processor is an architecture and a process for implementing a hardware subsystem as a direct replacement for a subroutine in a larger program. The approach provides a framework that helps the developer analyze the tradeoffs of using hardware acceleration, and a design procedure to guide the implementation process. To illustrate the design process, an HWS implementation of a derivative estimation subroutine is described. In this context I show how key performance parameters of the HWS can be estimated in advance of complete implementation, and how decisions can be made regarding the potential benefit of implementation alternatives for program performance improvement. Performance of the actual hardware coprocessor is compared to the software-only implementation and to estimates developed during the design process.

Philip M. Johnson, Carleton A. Moore, and Jitender Miglani. Hackystat design notes. Technical Report CSDL-01-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 2001. [ .html ]

This document collects together a series of design notes concerning Hackystat, a system for automated collection and analysis of software engineering data. Some of the design notes include: Insights from the Presto Development Project: Requirements from the IDE for automated data collection; A roundtable discussion of Hackystat; Change management in Hackystat; Validated idle time detection; and Defect collection and analysis in Hackystat.

Philip M. Johnson. Hackystat developer release installation guide. Technical Report CSDL-01-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 2001. [ .html ]

This document provides an overview of the Hackystat developer distribution. This includes the structure of the source code, the Java-based component technologies Hackystat is built on (including Tomcat, Ant, Soap, Xerces, Cocoon, JavaMail, JUnit, HttpUnit, JDOM, and Jato), configuration instructions, testing, and frequently asked questions. An updated version of this document is provided in the actual developer release package; this technical report is intended to provide easy access to near-current instructions for those who are evaluating the system and would like to learn more before downloading the entire package.

Jitender Miglani. Inca business plan. Technical Report CSDL-01-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, April 2001. [ .pdf ]

Inca is an Enterprise JavaBean based technology to provide Internet-based allocation of course approval codes. This business plan explores the commercial potential of this technology. The Inca business plan was selected as a finalist in the 2001 Business Plan Competition of the University of Hawaii College of Business Administration.

Michael J. Staver. Lightweight disaster management training and control. M.S. Thesis Proposal CSDL-01-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2001. [ .pdf ]

Disaster management is increasingly a global enterprise for international organizations, governmental institutions, and arguably individuals. The tempo at which information is collected and disseminated during natural and man-made disasters paces the rate and effectiveness of relief efforts. As the Internet becomes a ubiquitous platform for sharing information, a browser-based application can provide disaster managers a lightweight solution for training and control. A heavyweight solution might include dedicated communications, real-time command and control software and hardware configurations, and dedicated personnel. In contrast, a lightweight solution requires trained personnel with Internet access to a server via computers or hand-held devices. Tsunami Sim provides asynchronous situational awareness with an interactive, Geographic Information System (GIS). Tsunami Sim is not capable of providing real-time situational awareness nor intended to replace or compete with heavyweight solutions developed for that purpose. Rather, Tsunami Sim will enhance the disaster managers' abilities to train for and control disasters in regions where heavyweight solutions are impractical. For distributed training, Tsunami Sim will provide deterministic and stochastic scenarios of historical and fictional disasters. Tsunami Sim will be an open-source, Java application implemented for maintainability and extensibility. United States Pacific Command (PACOM) located at Camp Smith, Hawai'i, will enable Tsunami Sim validation and assessment.

Weifeng Miao. J2EEval: A method for performance analysis of Enterprise JavaBean applications. M.S. Thesis Proposal CSDL-01-08, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2001. [ .pdf ]

J2EEval is a method for performance analysis of Enterprise JavaBean (EJB) applications. This proposal overviews the method and its application in the context of a case study of the Inca Course approval system.

Philip M. Johnson. Inca software requirements specification. Technical Report CSDL-01-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, April 2001. [ .html ]

Inca is a system designed to improve the efficiency and effectiveness of course approval request processing. This software requirements specification details: (a) the traditional manual process used by the ICS department for course approval request processing, (b) the 12 basic requirements Inca must satisfy, the fine-grained rules for prioritization of requests, (c) several usage scenarios, (d) n-tier architectural issues for an Enterprise JavaBeans implementation, and (e) miscellaneous requirements including authentication, data file formats, special topics, and so forth.

Joy M. Agustin and William M. Albritton. Vendor relationship management: Re-engineering the business process through B2B infrastructure to accelerate the growth of small businesses in geographically isolated areas. Technical Report CSDL-01-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2001. [ .pdf ]

Instead of limiting a business to the local populace, the World Wide Web gives global access to all companies that have made the transition online. Ideally, the Internet offers vast, untapped markets, lowers the costs of reaching these markets, and frees businesses from geographical constraints. Applying this to Hawaii, small companies can now sell their products in the expanding global marketplace, instead of restricting themselves to an island economy. The goal of research on the Vendor Relationship Management (VRM) System is to explore the requirements for new business process models and associated technological infrastructure for small local businesses in Hawaii that wish to exploit the global reach of the Internet. In order to understand the requirements and potential of this approach, we met with different groups of people, including the host of a virtual mall, a financial service provider, two courier services, and several local companies. The interface of the VRM system includes both a vendor and a host side. The host side is used by the virtual mall company to send customer orders to the various vendors. It can also be used to create and edit vendor company information, create and edit vendor product information, and enter a contact email address. The vendor side is used by the vendors to receive orders, confirm that orders have been sent, view customer information, create and edit product information, and create and edit contact information. After creating the first prototype, several experts gave their critiques of the system. Based on their critiques, we came up with several possible directions for future research.

Philip M. Johnson. You can't even ask them to push a button: Toward ubiquitous, developer-centric, empirical software engineering. In The NSF Workshop for New Visions for Software Design and Productivity: Research and Applications, Nashville, TN, December 2001. [ .pdf ]

Collection and analysis of empirical software project data is central to modern techniques for improving software quality, programmer productivity, and the economics of software project development. Unfortunately, barriers surrounding the cost, quality, and utility of empirical project data hamper effective collection and application in many software development organizations. This paper describes Hackystat, an approach to enabling ubiquitous collection and analysis of empirical software project data. The approach rests on three design criteria: data collection and analysis must be developer-centric rather than management-centric; it must be in-process rather than between-process; and it must be non-disruptive, requiring no interruption of developer activities to collect and/or analyze data. Hackystat is being implemented via an open source, sensor- and web-service-based architecture. After a developer instruments their commercial development environment tools (such as their compiler, editor, version control system, and so forth) with Hackystat sensors, data is silently and unobtrusively collected and sent to a centralized web service. The web service runs analysis mechanisms over the data and sends email notifications back to a developer when “interesting” changes in their process or product occur. Our research so far has yielded an initial operational release in daily use with a small set of sensors and analysis mechanisms, and a research agenda for expansion in the tools, the sensor data types, and the analyses. Our research has also identified several critical technical and social barriers, including: the fidelity of the sensors; the coverage of the sensors; the APIs exposed by commercial tools for instrumentation; and the security and privacy considerations required to avoid adoption problems due to the spectre of “Big Brother”.
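To make the sensor-and-web-service architecture described above concrete, here is a minimal Java sketch of how an instrumented tool might silently post collected data to a centralized analysis service. The class name, endpoint URL, and payload format are illustrative assumptions for this index, not Hackystat's actual API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical sensor; names and endpoint are assumptions, not Hackystat's API.
    public class EditorSensor {
        // Placeholder URL for the centralized web service that receives sensor data.
        private static final String SERVER = "http://example.org/hackystat/data";
        private final HttpClient client = HttpClient.newHttpClient();

        // Called by the instrumented tool whenever an event of interest occurs.
        public void send(String developer, String tool, String event) throws Exception {
            String payload = developer + "," + tool + "," + event + "," + System.currentTimeMillis();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(SERVER))
                    .header("Content-Type", "text/plain")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            // Send the data without interrupting the developer's work.
            client.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }

A production sensor would presumably batch events and tolerate network failures, but the sketch captures the non-disruptive flavor: the developer never has to push a button.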

Philip M. Johnson. Project hackystat: Accelerating adoption of empirically guided software development through non-disruptive, developer-centric, in-process data collection and analysis. Technical report, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 2001. [ .pdf ]

Collection and analysis of empirical software project data is central to modern techniques for improving software quality, programmer productivity, and the economics of software project development. Unfortunately, effective collection and analysis of software project data is rare in mainstream software development. Prior research suggests that three primary barriers are: (1) cost: gathering empirical software engineering project data is frequently expensive in resources and time; (2) quality: it is often difficult to validate the accuracy of the data; and (3) utility: many metrics programs succeed in collecting data but fail to make that data useful to developers. This report describes Hackystat, a technology initiative and research project that explores the strengths and weaknesses of a developer-centric, in-process, and non-disruptive approach to validation of empirical software project data collection and analysis.

Timothy Burgess. An artificial neural network for recognition of simulated dolphin whistles. M.S. Thesis CSDL-01-14, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2001. [ .pdf ]

It is known that dolphins are capable of understanding 200 "word" vocabularies with sentence complexity of three or more "words", where words consist of audio tones or hand gestures. An automated recognition method for words, where a word is a defined whistle within a predetermined acceptable degree of variance, could allow words to be both easily reproducible by dolphins and identifiable by humans. We investigate a neural network to attempt to distinguish four artificially generated whistles from one another and from common underwater environmental noises, where a whistle consists of four variations of a fundamental whistle style. We play these whistle variations into the dolphins' normal tank environment and then record from a separate tank hydrophone. This results in slight differences in each whistle variation's spectrogram, the complete collection of which we use to form the neural network training set. For a single whistle variation, the neural network demonstrates strong output node values, greater than 0.9 on a scale of 0 to 1. However, for combinations of "words", the network exhibits poor training performance and an inability to distinguish between words. To validate this, we used a test set of 41 examples, of which only 22 were correctly classified. This result suggests that an appropriately trained backpropagation neural network using spectrographic analysis as inputs is a viable means for very specific whistle recognition; however, a large degree of whistle variation dramatically lowers the performance of the network below that required for acceptable recognition.

2000

Robert S. Brewer. Improving mailing list archives through condensation. M.S. thesis, University of Hawaii, March 2000. [ .pdf ]

Searching the archives of electronic product support mailing lists often provides unsatisfactory results for users looking for quick solutions to their problems. Archives are inconvenient because they are too voluminous, lack efficient searching mechanisms, and retain the original thread structure, which is not relevant to knowledge seekers. I present MCS, a system which improves mailing list archives through condensation. Condensation involves omitting redundant or useless messages, and adding meta-level information to messages to improve searching. The condensation process is performed by a human assisted by an editing tool. I describe the design and implementation of MCS, and compare it to related systems. I also present my experiences condensing a 1428-message mailing list archive to an archive containing only 177 messages (an 88% reduction). The condensation required only 1.5 minutes of editor effort per message. The condensed archive was adopted by the users of the mailing list.

Robert S. Brewer. Improving problem-oriented mailing list archives with MCS. In Proceedings of the 2000 International Conference on Software Engineering, Limerick, Ireland, June 2000. [ .pdf ]

Developers often use electronic mailing lists when seeking assistance with a particular software application. The archives of these mailing lists provide a rich repository of problem-solving knowledge. Developers seeking a quick answer to a problem find these archives inconvenient, because they lack efficient searching mechanisms and retain the structure of the original conversational threads, which are rarely relevant to the knowledge seeker. We present a system called MCS which improves mailing list archives through a process called condensation. Condensation involves several tasks: extracting only messages of longer-term relevance, adding metadata to those messages to improve searching, and, when appropriate, editing the content of the messages for clarity. The condensation process is performed by a human editor (assisted by a tool), rather than by an artificial intelligence (AI) system. We describe the design and implementation of MCS, and compare it to related systems. We also present our experiences condensing a 1428-message mailing list archive to an archive containing only 177 messages (an 88% reduction). The condensation required only 1.5 minutes of editor effort per message. The condensed archive was adopted by the users of the mailing list.

Carleton A. Moore. Investigating Individual Software Development: An Evaluation of the Leap Toolkit. Ph.D. thesis, University of Hawaii, Department of Information and Computer Sciences, August 2000. [ .pdf ]

Software developers work too hard and yet do not get enough done. Developing high quality software efficiently and consistently is a very difficult problem. Developers and managers have tried many different solutions to address this problem. Recently their focus has shifted from the software organization to the individual software developer. For example, the Personal Software Process incorporates many of the previous solutions while focusing on the individual software developer. This thesis presents the Leap toolkit, which combines ideas from prior research on the Personal Software Process, Formal Technical Review, and my experiences building automated support for software engineering activities. The Leap toolkit is intended to help individuals in their efforts to improve their development capabilities. Since it is a light-weight, flexible, powerful, and private tool, it provides a novel way for developers to gain valuable insight into their own development process. The Leap toolkit also addresses many measurement and data issues involved with recording any software development process. The main thesis of this work is that the Leap toolkit provides a novel tool that allows developers and researchers to collect and analyze software engineering data. To investigate some of the issues of data collection and analysis, I conducted a case study of 16 graduate students in an advanced software engineering course at the University of Hawaii, Manoa. The case study investigated: (1) the relationship between the Leap toolkit's time collection tools and “collection stage” errors; and (2) different time estimation techniques supported by the Leap toolkit. The major contributions of this research include (1) the LEAP design philosophy; (2) the Leap toolkit, which is a novel tool for individual developer improvement and software engineering research; and (3) the insights from the case study about collection overhead, collection error, and project estimation.

Carleton A. Moore. Lessons learned from teaching reflective software engineering using the Leap toolkit. In Proceedings of the 2000 International Conference on Software Engineering, Workshop on Software Engineering Education, Limerick, Ireland, May 2000. [ .pdf ]

This paper presents our experiences using the Leap toolkit, an automated tool to support personal developer improvement. The Leap toolkit incorporates ideas from the PSP and group review. It relaxes some of the constraints in the PSP and reduces process overhead. Our lessons learned include: (1) Collecting data about software development is useful; (2) Leap enables users to accurately estimate size and time in a known domain; (3) Many users feel their programming skills improve primarily due to practice, not their method; (4) To reduce measurement dysfunction, make the results less visible; (5) Partial defect collection and analysis is still useful; (6) Tool support should require few machine resources; and (7) Experience may lead to overconfidence.

Philip M. Johnson, Carleton A. Moore, Joseph A. Dane, and Robert S. Brewer. Empirically guided software effort guesstimation. IEEE Software, 17(6), December 2000. [ .pdf ]

Monir Hodges. Javajam: Supporting collaborative review and improvement of open source software. M.S. thesis, University of Hawaii, August 2000. [ .pdf ]

Development of Open Source Software is in many cases a collaborative effort, often by geographically dispersed team members. The problem for members is to efficiently review documentation and source code and to collect and share comments and annotations that will lead to improvements in performance, functionality, and quality. javaJAM is a collaborative tool for assisting with the development of Open Source Software. It generates integrated documentation and source code presentations to be viewed over the web. More importantly, javaJAM provides an interactive environment for navigating documentation and source code and for posting annotations. javaJAM creates relationships between sections of documentation, source, and related comments and annotations to provide the necessary cross-referencing to support quick and efficient reviews. javaJAM was evaluated in a classroom setting. Student teams posted projects for team review using javaJAM and found it to be an easy way to review their projects and post their comments.

Mette L. Moffett. A proposal for vcommerce: An internet entrepreneurship environment. Technical Report CSDL-00-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 2000. [ .pdf ]

The document proposes the development of an internet entrepreneurship simulation environment called VCommerce for the University of Hawaii Aspect Technology Grant program.

Philip M. Johnson. Vcommerce entrepreneur user guide. Technical Report CSDL-00-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 2000. [ .pdf ]

VCommerce is intended to provide you with an educational and stimulating introduction to the initial, "startup" phases of entrepreneurial activity in the online, Internet-enabled economy. VCommerce is designed to reward those who can innovate, explore market niches, design viable businesses within the context of the VCommerce world, exploit the information resources of the Internet for business planning, react appropriately to VCommerce market data, and develop effective partnerships with other people with complementary skills. This user guide provides an overview of the VCommerce process.

Philip M. Johnson. Vcommerce example business plan: Pizza portal. Technical Report CSDL-00-08, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 2000. [ .pdf ]

This document provides an example business plan for the VCommerce simulation. It details the design and implementation of a hypothetical business called "Pizza Portal".

Philip M. Johnson. A comparative review of locc and codecount. Technical Report CSDL-00-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 2000. [ http ]

This paper provides one review of the comparative strengths and weaknesses of LOCC (http://csdl.ics.hawaii.edu/Tools/LOCC/LOCC.html) and CodeCount (http://sunset.usc.edu/research/CODECOUNT/index.html), two tools for calculating the size of software source code.

Philip M. Johnson. Aligning the financial services, fulfillment distribution infrastructure, and small business sectors in hawaii through B2B technology innovation. Technical Report CSDL-00-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 2000. [ .pdf ]

This document is a proposal to the University of Hawaii New Economy Research Grant Program. It describes a study intended to discover business-to-business technologies that have the potential to improve the efficiency and reduce the cost for small Hawaiian businesses that produce physical products and desire to expand into national and international markets.

Mette L. Moffett. The design, development, and evaluation of vcommerce: A virtual environment to support entrepreneurial learning. B.S. Thesis CSDL-00-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 2000. [ .pdf ]

This thesis describes VCommerce, a virtual environment whose goal is to significantly increase students' knowledge of the process involved in starting a high tech company and, through hands-on experience, enhance their confidence in their ability to start such a company. The thesis presents the design and implementation of the environment, and a case study of its use in a graduate course of 50 students drawn from the computer science, business, engineering, and other departments. A course survey and fourteen post-semester interviews show that students felt the class was extremely effective in teaching entrepreneurship concepts, and that they learned valuable lessons about managing an Internet startup.

1999

Carleton A. Moore. Automated support for technical skill acquisition and improvement: An evaluation of the leap toolkit. Ph.D. Thesis Proposal CSDL-98-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1999. [ .pdf ]

Software developers work too hard and yet do not get enough done. Developing high quality software efficiently and consistently is a very difficult problem. Developers and managers have tried many different solutions to address this problem. Recently their focus has shifted from the software organization to the individual software developer. The Personal Software Process incorporates many of the previous solutions while focusing on the individual software developer. I combined ideas from prior research on the Personal Software Process, Formal Technical Review, and my experiences building automated support for software engineering activities to produce the Leap toolkit. The Leap toolkit is intended to help individuals in their efforts to improve their development capabilities. Since it is a light-weight, flexible, powerful, and private tool, it allows individual developers to gain valuable insight into their own development process. The Leap toolkit also addresses many measurement and data issues involved with recording any software development process. The main thesis of this work is that the Leap toolkit provides a more accurate and effective way for developers to collect and analyze their software engineering data than manual methods. To evaluate this thesis I will investigate three claims: (1) the Leap toolkit prevents many important errors in data collection and analysis; (2) the Leap toolkit supports data collection and analyses that are not amenable to manual enactment; and (3) the Leap toolkit reduces the level of “collection stage” errors. To evaluate the first claim, I will show how the design of the Leap toolkit effectively prevents important classes of errors shown to occur in prior related research. To evaluate the second claim, I will conduct an experiment investigating 14 different quantitative time estimation techniques based upon historical size data to show that the Leap toolkit is capable of complex analyses not possible with manual methods. To evaluate the third claim, I will analyze software developers' data and conduct surveys to investigate the level of data collection errors.

Carleton A. Moore. Project leap: Personal process improvement for the differently disciplined. In Proceedings of the Doctoral Workshop from the 1999 International Conference on Software Engineering, Los Angeles, CA., May 1999. [ .pdf ]

This paper overviews the research motivations for Project Leap.

Philip M. Johnson and Anne M. Disney. A critical analysis of PSP data quality: Results from a case study. Journal of Empirical Software Engineering, December 1999. [ .pdf ]

The Personal Software Process (PSP) is used by software engineers to gather and analyze data about their work. Published studies typically use data collected using the PSP to draw quantitative conclusions about its impact upon programmer behavior and product quality. However, our experience using PSP led us to question the quality of data both during collection and its later analysis. We hypothesized that data quality problems can make a significant impact upon the value of PSP measures, significant enough to lead to incorrect conclusions regarding process improvement. To test this hypothesis, we built a tool to automate the PSP and then examined 89 projects completed by ten subjects using the PSP manually in an educational setting. We discovered 1539 primary errors and categorized them by type, subtype, severity, and age. To examine the collection problem we looked at the 90 errors that represented impossible combinations of data and at other less concrete anomalies in Time Recording Logs and Defect Recording Logs. To examine the analysis problem we developed a rule set, corrected the errors as far as possible, and compared the original and corrected data. We found significant differences for measures such as yield and the cost-performance ratio, confirming our hypothesis. Our results raise questions about the accuracy of manually collected and analyzed PSP data, indicate that integrated tool support may be required for high quality PSP data analysis, and suggest that external measures should be used when attempting to evaluate the impact of the PSP upon programmer behavior and product quality.

Philip M. Johnson. Reflective software engineering with the leap toolkit. Technical Report CSDL-99-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1999. [ http ]

This document describes an empirical, experience-based approach to software engineering at the individual level using the Leap toolkit.

Joseph A. Dane. Locc user guide. Technical Report CSDL-99-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1999. [ .html ]

This document describes the installation and use of LOCC. LOCC is a general mechanism for producing one or more measurements of the size of work products. LOCC can produce both the "total" size of a work product, as well as the "difference" in size between successive versions of the same work product.
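As a rough illustration of the "total" and "difference" measurements described above, the following Java sketch counts non-blank lines in two versions of a work product and reports the delta. It is a deliberately simplified, assumption-based example (real LOCC applies language-aware counting rules), shown only to convey the shape of the measurement.

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Toy size counter: counts non-blank lines; this is not LOCC's actual counting logic.
    public class ToySizeCounter {
        static long size(Path file) throws Exception {
            try (var lines = Files.lines(file)) {
                return lines.filter(line -> !line.isBlank()).count();
            }
        }

        public static void main(String[] args) throws Exception {
            long previous = size(Path.of(args[0]));  // earlier version of the work product
            long current = size(Path.of(args[1]));   // current version
            System.out.println("Total size: " + current);
            System.out.println("Difference: " + (current - previous));
        }
    }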

Jennifer M. Geis. A case study of defect detection and analysis with jwiz. Technical Report CSDL-99-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1999. [ .pdf ]

This paper presents a study designed to investigate the occurrence of certain kinds of errors in Java programs using JavaWizard (JWiz), a static analysis mechanism for Java source code. JWiz is a tool that supports detection of certain commonly occurring semantic errors in Java programs. JWiz was used within a research framework designed to reveal (1) knowledge about the kinds of errors made by Java programmers, (2) differences among Java programmers in the kinds of errors made, and (3) potential avenues for improvement in the design and/or implementation of the Java language or environment. We found that all programmers inject a few of the same mistakes into their code, but these are only minor, non-defect causing errors. We also found that the types of defects injected vary drastically with no correlation to program size or developer experience. Finally, we found that for those developers who make some of the mistakes that JWiz is designed for, JWiz can be a great help, saving significant amounts of time ordinarily spent tracking down defects in test.

Robert S. Brewer. Aspect technology fund grant proposal: Condensation of educational mailing lists. Grant Application CSDL-99-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1999. [ .pdf ]

I propose the extension of the Mailinglist Condensation System to the realm of class support mailing lists in education. Condensed archives of the mailing lists can be used by future students to learn from the students of previous semesters, instead of having the information thrown out at the end of each semester. I will pursue this by piloting the system on two classes in Fall 1999. Furthermore, I show the feasibility of creating a company, based on the open source model, that will sell service and support for MCS.

Carleton A. Moore. The aspect technology fund grant proposal: Business plan improvement using software engineering principles. Grant Application CSDL-99-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1999. [ .pdf ]

This proposal describes the motivation, organization, and potential products and services for a company that supports the creation of high quality business plans.

Carleton A. Moore. Project LEAP: Addressing measurement dysfunction in review. In Proceedings of the Eighth International Conference on Human-Computer Interaction, Munich, Germany, August 1999. [ .pdf ]

The software industry and academia believe that software review, specifically Formal Technical Review (FTR), is a powerful method for improving the quality of software. Computer support for FTR reduces the overhead of conducting reviews for reviewers and managers. Computer support for FTR also allows for the easy collection of empirical measurements of the process and products of software review. These measurements allow researchers or reviewers to gain valuable insights into the review process. After looking closely at review metrics, we became aware of the possibility of measurement dysfunction in formal technical review. Measurement dysfunction is a situation in which the act of measurement affects the organization in a counter-productive fashion, leading to results directly counter to those intended by the organization for the measurement. How can we reduce the threat of measurement dysfunction in software review without losing the benefits of metrics collection? Project LEAP is our attempt to answer this question. This paper presents Project LEAP's approach to the design, implementation, and evaluation of tools and methods for empirically-based, individualized software developer improvement.

Philip M. Johnson. Leap: A “personal information environment” for software engineers. In Proceedings of the 1999 International Conference on Software Engineering, Los Angeles, CA., May 1999. [ .pdf ]

The Leap toolkit is designed to provide Lightweight, Empirical, Anti-measurement dysfunction, and Portable approaches to software developer improvement. Using Leap, software engineers gather and analyze personal data concerning time, size, defects, patterns, and checklists. They create and maintain definitions describing their software development procedures, work products, and project attributes, including document types, defect types, severities, phases, and size definitions. Leap also supports asynchronous software review and facilitates integration of this group-based data with individually collected data. The Leap toolkit provides a “reference model” for a personal information environment to support skill acquisition and improvement for software engineers.

Joseph A. Dane. A proposal for an oahu internet ocean sports resource. Technical Report CSDL-99-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1999. [ .pdf ]

This document proposes the development of a geographic information system available over the Internet to help visitors to Hawaii become aware of ocean sports opportunities.

Joseph A. Dane. Modular program size counting. M.S. thesis, University of Hawaii, December 1999. [ .pdf ]

Effective program size measurement is difficult to accomplish. Factors such as program implementation language, programmer experience and application domain influence the effectiveness of particular size metrics to such a degree that it is unlikely that any single size metric will be appropriate for all applications. This thesis introduces a tool, LOCC, which provides a generic architecture and interface to the production and use of different size metrics. Developers can use the size metrics distributed with LOCC or can design their own metrics, which can be easily incorporated into LOCC. LOCC pays particular attention to the problem of supporting incremental development, where a work product is not created all at once but rather through a sequence of small changes applied to previously developed programs. LOCC requires that developers of new size metrics support this approach by providing a means of comparing two versions of a program. LOCC's effectiveness was evaluated by using it to count over 50,000 lines of Java code, by soliciting responses to a questionnaire sent to users, and by personal reflection on the process of using and extending it. The evaluation revealed that users of LOCC found that it assisted them in their development process, although there were some improvements which could be made.
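The generic, extensible architecture described above suggests a small plug-in contract along the following lines. The interface and method names here are hypothetical, sketched only to show how a user-defined metric could supply both a total size and a comparison of two successive versions; they are not drawn from LOCC's actual API.

    import java.nio.file.Path;

    // Hypothetical plug-in contract for a size metric; names are illustrative, not LOCC's API.
    public interface SizeMetric {
        // Measure the total size of one version of a work product.
        long measure(Path workProduct) throws Exception;

        // Measure the change between two successive versions, supporting incremental development.
        default long difference(Path oldVersion, Path newVersion) throws Exception {
            return measure(newVersion) - measure(oldVersion);
        }
    }

Under this kind of contract, a line counter, a method counter, or a domain-specific metric could each implement the interface and be selected by the tool at run time.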

Philip M. Johnson. Java-based software engineering technology for high quality development in "internet time" organizations. Technical Report CSDL-99-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1999. [ .html ]

This grant will support deployment and evaluation of four software engineering technologies to support high quality development in "Internet Time" environments. The Leap toolset supports technical skill acquisition. MCS improves the capability of mailing lists to provide technical support. OpenJavaDoc facilitates open source distribution and software development. The JavaWizard Internet Trial provides community-wide statistics on Java programming errors. The research projects will be structured to provide Sun developers with early access to the systems, to provide tangible software engineering benefits to Sun development groups, and to enable Sun developers to provide feedback that can influence future development.

Carleton A. Moore. Teaching software engineering skills with the leap toolkit. Technical Report CSDL-99-12, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1999. [ .pdf ]

The Personal Software Process (PSP) teaches software developers many valuable software engineering techniques. Developers learn how to develop high quality software efficiently and how to accurately estimate the amount of effort it will take. To accomplish this, the PSP forces the developer to follow a very strict development model, to manually record time, defect, and size data, and to analyze their data. The PSP appears successful at improving developer performance during the training, yet there are questions concerning long-term adoption rates and the accuracy of PSP data. This paper presents our experiences using the Leap toolkit, an automated tool to support personal developer improvement. The Leap toolkit incorporates ideas from the PSP and group review. It relaxes some of the constraints in the PSP and reduces process overhead. We are using the Leap toolkit in an advanced software engineering course at the University of Hawaii, Manoa.

Philip M. Johnson. Project leap: Lightweight, empirical, anti-measurement dysfunction, and portable software developer improvement. ACM Software Engineering Notes, 24(6), December 1999. [ .pdf ]

Project LEAP investigates the use of lightweight, empirical, anti-measurement dysfunction, and portable approaches to software developer improvement. This document provides a one-page progress report on Project Leap for inclusion in the "Millenium" issue of Software Engineering Notes.

Philip M. Johnson, Audris Mockus, and Larry Votta. A controlled experimental study of the personal waterslide process: Results and interpretations. Technical Report CSDL-00-12, Waterslide Engineering Institute, Oulu, Finland, June 1999. [ .pdf ]

The paper reports on the Personal Waterslide Process, an innovative software engineering technique pioneered during the 1999 annual meeting of the International Software Engineering Research Network in Oulu, Finland.

1998

Philip M. Johnson. Reengineering inspection: The future of formal technical review. Communications of the ACM, 41(2):49-52, February 1998. [ .pdf ]

Formal technical review is acknowledged as a preeminent software quality improvement method. The “inspection” review method, first introduced by Michael Fagan twenty years ago, has led to dramatic improvements in software quality. It has also led to a myopia within the review community, which tends to view inspection-based methods as not just effective, but as the optimal approach to formal technical review. This article challenges this view by presenting a taxonomy of software review that shows inspection to be just one among many valid approaches. The article then builds upon this framework to propose seven guidelines for the radical redesign and improvement of formal technical review during the next twenty years.

Philip M. Johnson and Danu Tjahjono. Does every inspection really need a meeting? Journal of Empirical Software Engineering, 4(1):9-35, January 1998. [ .ps.Z ]

Software review is a fundamental component of the software quality assurance process, yet significant controversies surround the most efficient and effective review method. A central question surrounds the use of meetings; traditional review practice views them as essential, while more recent findings question their utility. To provide insight into this question, we conducted a controlled experiment to assess several measures of cost and effectiveness for a meeting and non-meeting-based review method. The experiment used CSRS, a computer-mediated collaborative software review environment, and 24 three-person groups. We found that the meeting-based review method studied was significantly more costly than the non-meeting-based method, but that meeting-based review did not find significantly more defects than the non-meeting-based method. However, the meeting-based review method was significantly better at reducing the level of false positives, and subjects subjectively preferred meeting-based review over non-meeting-based review. This paper presents the motivation for this experiment, its design and implementation, our empirical findings, pointers to Internet repositories for replication or additional analysis of this experiment, conclusions, and future directions.

Jennifer M. Geis. Javawizard: Investigating defect detection and analysis. M.S. thesis, University of Hawaii, May 1998. [ .pdf ]

This thesis presents a study designed to investigate the occurrence of certain kinds of errors in Java programs using JavaWizard (JWiz), a static analysis mechanism for Java source code. JWiz is an extensible tool that supports detection of certain commonly occurring semantic errors in Java programs. For this thesis, I used JWiz within a research framework designed to reveal (1) knowledge about the kinds of errors made by Java programmers, (2) differences among Java programmers in the kinds of errors made, and (3) potential avenues for improvement in the design and/or implementation of the Java language or environment. I performed a four-week case study, collecting data from 14 students over three programming projects which produced approximately 12,800 lines of code. The JWiz results were categorized into three types: functional errors (which must be fixed for the program to work properly), maintenance errors (the program will work, but the construct is considered bad style), and false positives (intended by the developer). Out of 235 JWiz warnings, there were 69 functional errors, 100 maintenance errors, and 66 false positives. The fix times for the functional errors added up to five and a half hours, or 7.3 percent of the total amount of time spent debugging in test. I found that all programmers inject a few of the same mistakes into their code, but these are only minor, non-defect-causing errors. I found that the types of defects injected vary drastically with no correlation to program size or developer experience. I also found that for those developers who make some of the mistakes that JWiz is designed for, JWiz can be a great help, saving significant amounts of time ordinarily spent tracking down defects in test.
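For readers unfamiliar with the kind of "commonly occurring semantic error" such a checker targets, the fragment below shows one classic Java example: comparing string contents with == rather than equals(). Whether JWiz flags this particular pattern is not stated in the abstract, so treat it purely as an illustration of the category.

    // Illustrative example of a semantic error a static checker might flag;
    // not necessarily one of JWiz's actual warnings.
    public class StringCompare {
        public static void main(String[] args) {
            String expected = "done";
            String actual = new String("done");
            if (actual == expected) {        // compares object references: likely a defect
                System.out.println("finished");
            }
            if (actual.equals(expected)) {   // compares contents: the intended check
                System.out.println("finished (correct comparison)");
            }
        }
    }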

Anne M. Disney and Philip M. Johnson. Investigating data quality problems in the PSP. In Sixth International Symposium on the Foundations of Software Engineering (SIGSOFT'98), Orlando, FL., November 1998. [ .pdf ]

The Personal Software Process (PSP) is used by software engineers to gather and analyze data about their work. Published studies typically use data collected using the PSP to draw quantitative conclusions about its impact upon programmer behavior and product quality. However, our experience using PSP in both industrial and academic settings revealed problems both in collection of data and its later analysis. We hypothesized that these two kinds of data quality problems could make a significant impact upon the value of PSP measures. To test this hypothesis, we built a tool to automate the PSP and then examined 89 projects completed by ten subjects using the PSP manually in an educational setting. We discovered 1539 primary errors and categorized them by type, subtype, severity, and age. To examine the collection problem we looked at the 90 errors that represented impossible combinations of data and at other less concrete anomalies in Time Recording Logs and Defect Recording Logs. To examine the analysis problem we developed a rule set, corrected the errors as far as possible, and compared the original and corrected data. This resulted in significant differences for measures such as yield and the cost-performance ratio, confirming our hypothesis. Our results raise questions about the accuracy of manually collected and analyzed PSP data, indicate that integrated tool support may be required for high quality PSP data analysis, and suggest that external measures should be used when attempting to evaluate the impact of the PSP upon programmer behavior and product quality.

Anne M. Disney, Jarrett Lee, Tuan Huynh, and Jennifer Saito. Investigating the design and evaluation of research web sites. Technical Report CSDL-98-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1998. [ .html ]

The Aziza design group (formerly the 691 Web Development Team) was commissioned by CSDL to implement a new web site. The group was assigned not only to update the entire site, but also to research and investigate the process and life cycle of World Wide Web site development. This research document records the process and products of updating the CSDL web site. It discusses issues such as the balance between providing information and providing an image of the group, and ways to share research information over the World Wide Web. To support these findings, evaluations by the various users of the site were conducted and are discussed here. This document records our web site design processes, the insights we gained about those processes, our findings, and finally, our conclusions.

Anne M. Disney, Jarrett Lee, Tuan Huynh, and Jennifer Saito. Csdl web site requirements specification document. Technical Report CSDL-98-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, April 1998. [ .html ]

The purpose of this document is to summarize the results of our background research for the CSDL web site, and describe the resulting requirements for evaluation and review.

Robert S. Brewer. Improving mailing list archives through condensation. M.S. Thesis Proposal CSDL-98-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 1998. [ .pdf ]

Electronic mailing lists are popular Internet information sources. Many mailing lists maintain an archive of all messages sent to the list, which is often searchable using keywords. While useful, these archives suffer from the fact that they include all messages sent to the list. Because they include all messages, the ability of users to rapidly find the information they want in the archive is hampered. To solve the problems inherent in current mailing list archives, I propose a process called condensation whereby one can strip out all the extraneous, conversational aspects of the data stream, leaving only the pearls of interconnected wisdom. To explore this idea of mailing list condensation and to test whether or not a condensed archive of a mailing list is actually better than traditional archives, I propose the construction and evaluation of a new software system. I name this system the Mailing list Condensation System or MCS. MCS will have two main parts: one which is dedicated to taking the raw material from the mailing list and condensing it, and another which stores the condensed messages and allows users to retrieve them. The condensation process is performed by a human editor (assisted by a tool), not an AI system. While this adds a certain amount of overhead to the maintenance of the MCS-generated archive when compared to a traditional archive, it makes the system implementation feasible. I believe that an MCS-generated mailing list archive maintained by an external researcher will be adopted as an information resource by the subscribers of that mailing list. Furthermore, I believe that subscribers will prefer the MCS-generated archive over existing traditional archives of the mailing list. This thesis will be tested by a series of quantitative and qualitative measures.

Anne M. Disney. Data quality problems in the personal software process. M.S. thesis, University of Hawaii, August 1998. [ .pdf ]

The Personal Software Process (PSP) is used by software engineers to gather and analyze data about their work and to produce empirically based evidence for the improvement of planning and quality in future projects. Published studies have suggested that adopting the PSP results in improved size and time estimation and in reduced numbers of defects found in the compile and test phases of development. However, personal experience using PSP in both industrial and academic settings caused me to wonder about the quality of two areas of PSP practice: collection and analysis. To investigate this I built a tool to automate the PSP and then examined 89 projects completed by nine subjects using the PSP in an educational setting. I discovered 1539 primary errors and analyzed them by type, subtype, severity, and age. To examine the collection problem I looked at the 90 errors that represented impossible combinations of data and at other less concrete anomalies in Time Recording Logs and Defect Recording Logs. To examine the analysis problem I developed a rule set, corrected the errors as far as possible, and compared the original and corrected data. This resulted in substantial differences for numbers such as yield and the cost-performance ratio. The results raise questions about the accuracy of published data on the PSP and directions for future research.

Philip M. Johnson and Anne M. Disney. The personal software process: A cautionary case study. IEEE Software, 15(6), November 1998.

In 1995, Watts Humphrey introduced the Personal Software Process in his book, A Discipline for Software Engineering. Programmers who use the PSP gather measurements related to their own work products and the process by which they were developed, then use these measures to drive changes to their development behavior. After almost three years of teaching and using the PSP, we have experienced the educational benefits of the PSP. As researchers, however, we have also uncovered evidence of certain limitations, which we believe can help improve appropriate adoption and evaluation of the method by industrial and academic practitioners. This paper presents an overview of a case study we performed that provides evidence of potential data quality problems, along with recommendations for those interested in adopting PSP within industry or academia.

Jennifer M. Geis. Javawizard user guide. Technical Report CSDL-98-15, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1998. [ .html ]

This document describes the use of JavaWizard, an automated code checker for the Java programming language. The user guide includes directions for installation, command line invocation, and graphical user interface invocation.

1997

Philip M. Johnson and Danu Tjahjono. Assessing software review meetings: A controlled experimental study using CSRS. In Proceedings of the 1997 International Conference on Software Engineering, pages 118-127, Boston, MA., May 1997. [ .pdf ]

Software review is a fundamental component of the software quality assurance process, yet significant controversies exist concerning the efficiency and effectiveness of various review methods. A central question surrounds the use of meetings: traditional review practice views them as essential, while more recent findings question their utility. We conducted a controlled experimental study to assess several measures of cost and effectiveness for a meeting and non-meeting-based review method. The experiment used CSRS, a computer-mediated collaborative software review environment, and 24 three-person groups. Some of the data we collected included: the numbers of defects discovered, the effort required, the presence of synergy in the meeting-based groups, the occurrence of false positives in the non-meeting-based groups, and qualitative questionnaire responses. This paper presents the motivation for this experiment, its design and implementation, our empirical findings, conclusions, and future directions.

Philip M. Johnson. PSP/Baseline: Software requirements specification. Technical Report CSDL-96-19, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1997. [ .html ]

PSP/Baseline is a system design that predated Project LEAP by about a year. The PSP/Baseline system was intended to provide an approach to empirical software process improvement inspired by, but different from, the Personal Software Process.

Adam A. Porter and Philip M. Johnson. Assessing software review meetings: Results of a comparative analysis of two experimental studies. IEEE Transactions on Software Engineering, 23(3):129-145, March 1997.

Software review is a fundamental tool for software quality assurance. Nevertheless, there are significant controversies as to the most efficient and effective review method. One of the most important questions currently being debated is the utility of meetings. Although almost all industrial review methods are centered around the inspection meeting, recent findings call their value into question. In prior research the authors of this paper separately and independently conducted controlled experimental studies to explore this issue. This paper presents new research to understand the broader implications of these two studies. To do this, we designed and carried out a process of “reconciliation” in which we established a common framework for the comparison of the two experimental studies, re-analyzed the experimental data with respect to this common framework, and compared the results. Through this process we found many striking similarities between the results of the two studies, strengthening their individual conclusions. It also revealed interesting differences between the two experiments, suggesting important avenues for future research.

Philip M. Johnson. An annotated overview of CSDL software engineering. Technical Report CSDL-97-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1997. [ .html ]

Current software engineering activities in CSDL can be viewed as consisting of two basic components: product engineering and process engineering. Product engineering refers to the various work products created during development. Process engineering refers to the various measurements and analyses performed on the development process. This document describes activities within CSDL over the past five years to better understand and improve our process and product engineering within our academic research development environment.

Philip M. Johnson. LEAP initial toolset: Software requirements specification. Technical Report CSDL-97-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1997. [ .html ]

This SRS for the LEAP Toolset is based heavily upon the ideas specified in the PSP/Baseline SRS. Conceptually, the LEAP toolset is a variant of the PSP/Baseline toolset in two major ways. First, the LEAP toolset is substantially simpler to implement and use. It will serve as a prototype for proof-of-concept evaluation of the ideas in the PSP/Baseline toolkit. Second, the LEAP toolset emphasizes group review and minimization of measurement dysfunction to a greater extent than the PSP/Baseline toolset.

Philip M. Johnson. Project LEAP: Lightweight, empirical, anti-measurement dysfunction, and portable software developer improvement. Technical Report CSDL-97-08, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1997. [ .pdf ]

Project LEAP investigates the use of lightweight, empirical, anti-measurement dysfunction, and portable approaches to software developer improvement. A lightweight method involves a minimum of process constraints, is relatively easy to learn, is amenable to integration with existing methods and tools, and requires only minimal management investment and commitment. An empirical method supports measurements that can lead to improvements in the software developer's skill. Measurement dysfunction refers to the possibility of measurements being used against the programmer, so the method must take care to collect and manipulate measurements in a “safe” manner. A portable method is one that can be applied by the developer across projects, organizations, and companies during her career. Project LEAP will investigate the strengths and weaknesses of this approach to software developer improvement in a number of ways. First, it will enhance and evaluate a LEAP-compliant toolset and method for defect entry and analysis. Second, it will use LEAP-compliant tools to explore the quality of empirical data collected by the Personal Software Process. Third, it will collect data from industrial evaluation of the toolkit and method. Fourth, it will create component-based versions of LEAP-compliant tools for defect and time collection and analysis that can be integrated with other software development environment software. Finally, Project LEAP will sponsor a web site providing distance learning materials to support education of software developers in empirically guided software process improvement. The web site will also support distribution and feedback of Project LEAP itself.

Philip M. Johnson. A proposal for CSDL2: A center for software development leadership through learning. Technical Report CSDL-98-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 1997. [ .html ]

This document describes the design of CSDL2: a social, physical, and virtual environment to support the development of world class software engineering professionals. In CSDL2, a "multi-generational learning community" of faculty, graduate students, and undergraduates all collaborate within a structured work environment for practicing product, process, and organizational engineering.

1996

Philip M. Johnson. Design for instrumentation: High quality measurement of formal technical review. Software Quality Journal, 5(3):33-51, March 1996. [ .pdf ]

All current software quality assurance methods incorporate some form of formal technical review (FTR), because structured analysis of software artifacts by a team of skilled technical personnel has demonstrated ability to improve quality. However, FTR methods come in a wide variety of forms with varying effectiveness, incur significant overhead on technical staff, and have little computer support. Measurements of these FTR methods are coarse-grained, frequently low quality, and expensive to obtain. This paper describes CSRS, a highly instrumented, computer-supported system for formal technical review, and shows how it is designed to collect high quality, fine-grained measures of FTR process and products automatically. The paper also discusses some results from over one year of experimentation with CSRS; describes how CSRS improves current process improvement approaches to FTR; and overviews several novel research projects on FTR that are made possible by this system.

Philip M. Johnson. From principle-centered to organization-centered design: A case study of evolution in a computer-supported formal technical review environment. In Interdisciplinary Approaches to System Analysis and Design, July 1996. [ .ps ]

Design of new computer-based environments to replace or augment traditional, manual work procedures is typically problematic due to its experimental and embedded nature: the requirements for the computer-based version of the task may not be well defined, and the outcome of introducing computer-based support may well change the nature of the task altogether. This paper illustrates these issues through a discussion of the evolution in the design of CSRS, an instrumented, computer-supported cooperative work environment for formal technical review. CSRS was originally designed in response to well-recognized shortcomings in traditional, non-computer-based formal technical review methods. The initial design was thus founded upon a principle-centered basis, where the system was oriented toward solving known problems in the domain of formal technical review. Over time, the design has evolved toward a more organization-centered basis, in which the system is oriented toward support for adoption and use within organizations, even if that support conflicts with the “principles” of formal technical review. We conjecture that such an evolution may be inevitable in experimental and embedded design domains.

Danu Tjahjono. Exploring the effectiveness of formal technical review factors with CSRS, a collaborative software review system. Ph.D. thesis, Department of Information and Computer Sciences, University of Hawaii, August 1996. [ .pdf ]

Formal Technical Review (FTR) plays an important role in modern software development. It can improve the quality of software products and the quality and productivity of their development processes. However, the effectiveness of current FTR practice is hampered by uncertainty and ambiguity. This research investigated two issues. First, what differences exist among current FTR methods? Second, what are potential review factors that impact upon the effectiveness of these methods? The approach taken by this research was to first develop a FTR framework, based on a review of literature in the field. The framework allows one to determine the similarities and differences between the review process of FTR methods, as well as to identify potential review factors. Specifically, it describes a review method in terms of seven components of a review process: phase, objective, degree of collaboration, synchronicity, role, technique, entry/exit criteria. By looking at the values of individual components, one can compare and contrast different FTR methods. Furthermore, by investigating these values empirically, one can methodically improve the practice of FTR. Second, a computer based review system, called CSRS, was developed to implement the framework. The system provides a set of declarative modeling languages, which allow one to create a wide variety of FTR methods, or to design experiments to compare the performance of two or more review methods, or to evaluate a set of review factors within a method. Finally, this research involved an empirical study using CSRS to investigate the effectiveness of a group process versus an individual process in finding program faults. Two review methods/systems were implemented using CSRS: EGSM (used by real groups) and EIAM (used by nominal groups). The experiment involved 24 groups of students (3 students per group), each reviewing two sets of source code, once using EGSM and once using EIAM. The experiment found that there were no significant differences in detection effectiveness between the two methods, that synergy was observed in EGSM but did not contribute significantly to the total faults found, and that EGSM incurred higher cost than EIAM, but was significantly more effective in filtering out false positives.

Jennifer M. Geis. An evaluation of Flashmail: a computer-mediated communication tool. Technical Report CSDL-95-21, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1996. [ .pdf ]

This paper presents the results from an analysis of a new computer-mediated communication tool called Flashmail. I investigated how people used Flashmail as well as Flashmail's relationship to conventional electronic mail. Participants in the experiment loaded extensions that gathered data regarding the characteristics of all messages sent through E-mail and Flashmail. This data was used to analyze the conditions under which each system was used. I found that Flashmail seems to be preferred whenever the message is short, needs to be communicated in a short period of time, and when both the recipient and the sender are logged into the system and active at the time of sending. In contrast, I found that E-mail was preferred for messages that were large (over 400 characters) and non-urgent, or when the receiver was either not logged into Flashmail or had been idle for longer than 7 minutes. These results indicate that Flashmail is generally used as a rapid, synchronous messaging method.

Philip M. Johnson. Egret: A framework for advanced CSCW applications. ACM Software Engineering Notes, 21(2), May 1996. [ .pdf ]

Egret is a publicly available, advanced framework for the construction of computer-supported cooperative work applications. Egret provides an approach to multi-user, interactive application development that differs markedly from other frameworks or infrastructures, such as GroupKit, WWW, or Lotus Notes. This short paper introduces Egret, its architecture, design philosophy, selected applications, and interest groups within the software engineering community. It concludes with information on how Egret's sources, binaries, and documentation may be obtained free of charge via the Internet.

Danu Tjahjono. CSRS design reference 3.5.0. Technical Report CSDL-96-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1996. [ .pdf ]

David Brauer, Philip M. Johnson, and Carleton A. Moore. Requiem for the project hi-time collaborative process. Technical Report CSDL-96-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, March 1996. [ .pdf ]

In early 1995, the State of Hawaii began work on an ambitious revision to its telecommunications policy planning process. A multidisciplinary team was commissioned to develop a proposal for an iterative, interactive, computer-mediated collaborative planning process whereby the State's telecommunications infrastructure plan could be developed and periodically upgraded to reflect technology and policy shifts in local communities. The proposal included a sophisticated CSCW software system called HI-TIME, which would both enact the planning process and provide the general public with access to and visibility into it. In early 1996, the ambitious collaborative planning process, including the implemented and deployed HI-TIME system, was abandoned in favor of a more traditional approach. This paper explores the rise and fall of Project HI-TIME and the lessons it holds for developers of CSCW systems.

Philip M. Johnson. State as an organizing principle for CSCW architectures. Technical Report CSDL-96-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, March 1996. [ .pdf ]

A useful way to gain insight into collaborative architectures is by analyzing how they collect, represent, store, analyze, and distribute state information. This paper presents state as an organizing principle for collaborative architectures. It uses a framework with eight dimensions to analyze four systems: WWW, GroupKit, Lotus Notes, and Egret. The analysis illuminates similarities and differences between these architectures, and yields two conjectures: that no single collaborative architecture can fully support both collaboration-in-the-small and collaboration-in-the-large, and that flexible and efficient support for state management requires architectural support for agents as first-class users.

Philip M. Johnson. BRIE: A Benchmark Inspection Experiment. Technical Report CSDL-96-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 1996. [ .html ]

The BenchmaRk Inspection Experiment (BRIE) is an attempt to design and package a simple experimental design that satisfies the goals of a benchmark experiment. The BRIE acronym has a second expansion: Basic RevIew Education. BRIE is designed to have a second, complementary goal: a high quality training package for a simple formal technical review method. Thus, BRIE is a curriculum module intended to be useful in either an industry or academic setting to introduce students to both software review and empirical software engineering research practice.

Philip M. Johnson. Measurement dysfunction in formal technical review. Technical Report CSDL-96-16, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1996. [ .html ]

This paper explores some of the issues that arise in effective use of measures to monitor and improve formal technical review practice in industrial settings. It focuses on measurement dysfunction: a situation in which the act of measurement affects the organization in a counter-productive fashion, which leads to results directly counter to those intended by the organization for the measurement.

1995

Philip M. Johnson, Carleton A. Moore, and Rosemary Andrada. HBS interface specification. Technical Report CSDL-94-14, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1995. [ .pdf ]

This document specifies the interface protocol observed between the HBS server system and the ECS client system, together known as Egret. HBS is a multiuser database server for (non-video) hypermedia applications. It manages storage, locking, retrieval, and inter-client communications. This document describes the interface between HBS and ECS in enough detail that alternative database servers can be built to service requests from ECS clients. It is also intended to serve as a source of reference material for maintainers of the HBS and ECS systems.

Philip M. Johnson and Carleton A. Moore. Investigating strong collaboration with the Annotated Egret Navigator. In Proceedings of the Fourth IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET-ICE '95), April 1995. [ .pdf ]

The Annotated Egret Navigator (AEN) is a system designed to support strong collaboration among a group as they cooperatively build, review, revise, and improve a structured hypertext document. AEN was used as the central instructional and research system for a graduate seminar on collaborative systems at the University of Hawaii during Fall, 1994. AEN was used for over 285 hours during the second half of the semester alone, and users generated over 800 nodes and 800 links. Lessons learned about strong collaboration include: (1) Users as well as artifacts should be visible; (2) Provide direct and indirect authoring mechanisms; (3) Provide context-sensitive change information; (4) Provide access to intermediate work products; (5) Maintain database integrity; (6) The WWW is not effective for strong collaboration; and (7) An agent-based architecture may be necessary for systems supporting strong collaboration.

Carleton A. Moore. HBS design document. Technical Report CSDL-95-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1995. [ .pdf ]

HBS is an 11 KLOC hypertext multiuser database server written in C++. HBS is designed to work with ECS clients as part of the Egret client-server system. HBS is organized into four blocks: File Operations, Basic Hypertext Operations, Events and Locks, and Client/Server Operations. There is also a built-in debugging mechanism and memory-leak detection system. This document describes the internal design of HBS.

Carleton A. Moore. Supporting authoring and learning in a strongly collaborative hypertext system: The annotated egret navigator. M.S. Thesis CSDL-95-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 1995. [ .pdf ]

With the increased use of hypertext, the issues behind collaborative authoring of hypertext are becoming more important. This thesis presents the Annotated Egret Navigator (AEN), a system designed to support strong collaboration among a group as they cooperatively build, review, revise, improve and learn from a structured hypertext document. AEN addresses how strong collaboration can be supported through computer mediation. It is designed to support collaborative creation of hypertext and to instrument the actions of its users in order to understand how such creation occurs.

Danu Tjahjono. Building software review systems using CSRS. Technical Report CSDL-95-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1995. [ .pdf ]

The importance of software review, or Formal Technical Review (FTR), and its benefits have been well documented. However, there are many variations of the method in practice, especially in the group process. This paper discusses a new approach whereby organizations can build the review systems best suited to them. Our basic approach is to use the CSRS modeling languages to describe a review method; the descriptions are then compiled to generate the corresponding review system. The CSRS modeling languages are based on an FTR framework that models both the variations in group process and the review strategies exhibited by current FTR methods.

Danu Tjahjono. Comparing the cost effectiveness of group synchronous review method and individual asynchronous review method using CSRS: Results of pilot study. Technical Report CSDL-95-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1995. [ .pdf ]

This document describes a pilot experiment that compares the cost effectiveness of a group-based review method (EGSM) to that of an individual-based review method (EIAM) using CSRS. In this pilot study, no significant differences in review effectiveness and review cost were found. This document provides complete details on the procedures and outcomes from this pilot study, as well as the lessons learned which will be applied to an upcoming experimental study.

Philip M. Johnson. The Egret Primer: A tutorial guide to coordination and control in interactive client-server-agent applications. Technical Report CSDL-95-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 1995. [ .pdf ]

Rosemary Andrada. Building community through the world wide web. M.S. Thesis CSDL-95-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 1995. [ .pdf ]

This thesis presents a case study designed to assess the strengths and weaknesses of a computer-based approach to improving the sense of community within one organization, the Department of Information and Computer Sciences at the University of Hawaii. The case study used a pretest-posttest design. First, several measures of the sense of community within the department were obtained via a questionnaire. Second, a World Wide Web information system was introduced in an effort to affect the level of community within the department. Third, a similar questionnaire was administered after a period of four months. Analysis of the survey responses and system logs showed that the information system designed to promote community in this organization had instead polarized it. However, these systems can also serve as a diagnostic tool for discovering what factors may help promote or inhibit community building.

Carleton A. Moore. WET ICE tools working group report. Technical Report CSDL-95-12, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1995. [ .pdf ]

Danu Tjahjono. Results of CSRS experiments. Technical Report CSDL-95-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1995.

This document provides the data collected from two experiments on software review conducted using CSRS during the Spring of 1995.

Philip M. Johnson. The CA/M architecture for Project HI-TIME. Technical Report CSDL-95-14, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1995. [ .pdf ]

This document reports on the work done as part of the project “Collaboration Mechanisms for Project HI-TIME: Hawaii Telecommunications Infrastructure Modernization and Expansion: A Model for Statewide Strategic Planning”, Subcontract 131030-002. In response to the requirements of Project HI-TIME, a collaborative architecture called “CA/M” was designed, implemented, and used to build a collaborative system for the project. This report documents the current state of the project, providing an overview of the Project HI-TIME requirements, the CA/M architecture designed in response to these requirements, and the status of research on this project.

Julio Polo. A quick guided tour of Shemacs. Technical Report CSDL-95-16, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1995. [ .pdf ]

This is a quick guided tour through the main features of Shemacs, a concurrent editor built using the Egret collaborative framework.

Danu Tjahjono and Philip M. Johnson. FTArm user's guide (version 1.2.0). Technical Report CSDL-95-18, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1995. [ .pdf ]

This manual describes the FTArm system for review participants and the administrator. FTArm is a computer-mediated process for software review based upon Egret, a framework for collaborative systems. This document includes descriptions of how to execute the review process, what review artifacts are involved, and the associated user commands for manipulating the artifacts and the process.

Danu Tjahjono and Philip M. Johnson. FTArm demonstration guide (version 1.2.0). Technical Report CSDL-95-19, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1995. [ .pdf ]

This document provides a step-by-step demonstration of the simple use of the CSRS system using the FTArm review method. FTArm is a computer-mediated process for software review based upon Egret, a framework for collaborative systems.

Rosemary Andrada. The effect of a virtual world wide web community on its physical counterpart. Technical Report CSDL-95-20, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1995. [ .pdf ]

This paper overviews a study that assessed the strengths and weaknesses of a computer-based approach to improving the sense of community within one organization, the Department of Information and Computer Sciences at the University of Hawaii. The case study used a pretest-posttest design. First, several measures of the sense of community within the department were obtained via a questionnaire. Second, a World Wide Web information system was introduced in an effort to affect the level of community within the department. Third, a similar questionnaire was administered after a period of four months. Analysis of the survey responses and system logs showed that the information system designed to promote community had instead polarized some of its members. In addition, the system served as a valuable diagnostic tool for discovering what factors may help promote or inhibit community building.

Carleton A. Moore. Strong collaboration in AEN. Technical Report CSDL-95-22, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1995. [ .pdf ]

This paper overviews the Annotated Egret Navigator (AEN), a system designed to support strong collaboration among a group as they cooperatively build, review, revise, improve and learn from a structured hypertext document. AEN addresses how strong collaboration can be supported through computer mediation. It is designed to support collaborative creation of hypertext and to instrument the actions of its users in order to understand how such creation occurs.

1994

Philip M. Johnson. Experiences with EGRET: An exploratory group work environment. Collaborative Computing, 1(1), January 1994. [ .pdf ]

Exploratory collaboration occurs in domains where the structure and process of group work evolves as an intrinsic part of the collaborative activity. Traditional database and hypertext structural models do not provide explicit support for collaborative exploration. EGRET is an implemented environment for the development of domain-specific collaborative systems that defines a novel data and process model along with services for exploratory collaboration. To accomplish this, EGRET departs from traditional notions of the relationship between schema and instance structure. In EGRET, schema structure is viewed as a representation of the current state of consensus among collaborators, from which instance structure is allowed to depart in a controlled fashion. To provide such exploratory services in a responsive interactive environment, EGRET implements specialized architectural mechanisms. This paper presents the concepts and implications of exploratory collaboration, followed by the design and implementation of EGRET. The paper concludes with our results to date, which demonstrate that EGRET succeeds in providing useful services for exploratory collaboration, though interesting technical and cultural issues remain to be addressed before exploratory collaboration can become commonplace in CSCW systems.

Robert S. Brewer and Philip M. Johnson. Collaborative classification and evaluation of usenet. Technical Report CSDL-93-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1994. [ .pdf ]

Usenet is an example of the potential and problems of the nascent National Information Infrastructure. While Usenet makes an enormous amount of useful information available to its users, the daily data overwhelms any user who tries to read more than a fraction of it. This paper presents a collaboration-oriented approach to information classification and evaluation for very large, dynamic database structures such as Usenet. Our approach is implemented in a system called URN, a multi-user, collaborative, hypertextual Usenet reader. We show that this collaborative method, coupled with an adaptive interface, radically improves the overall relevance level of information presented to a user.

Dadong Wan. CLARE: A Computer-Supported Collaborative Learning Environment Based on the Thematic Structure of Scientific Text. Ph.D. thesis, University of Hawaii, Department of Information and Computer Sciences, May 1994. [ .pdf ]

This dissertation presents a computer-based collaborative learning environment, called CLARE, that is based on the theory of learning as collaborative knowledge building. It addresses the question, "what can a computer do for a group of learners beyond helping them share information?" CLARE differs from virtual classrooms and hypermedia systems in three ways. First, CLARE is grounded in the theory of meaningful learning, which focuses on the role of meta-knowledge in human learning. Instead of merely allowing learners to share information, CLARE provides an explicit meta-cognitive framework, called RESRA, to help learners interpret information and build knowledge. Second, CLARE defines a new group process, called SECAI, that guides learners to systematically analyze, relate, and discuss scientific text through a set of structured steps: summarization, evaluation, comparison, argumentation, and integration. Third, CLARE provides a fine-grained, non-obtrusive instrumentation mechanism that keeps track of the usage process of its users. Such data forms an important source of feedback for enhancing the system and a basis for rigorously studying the collaborative learning behaviors of CLARE users. CLARE was evaluated through sixteen usage sessions involving six groups of students from two classes. The experiments comprised a total of about 300 hours of usage and over 80,000 timestamps. The survey shows that about 70% of learners think that CLARE provides a novel way of understanding scientific text, and about 80% think that it provides a novel way of understanding their peers' perspectives. The analysis of the CLARE database and the process data also reveals that learners differ greatly in their interpretations of RESRA, strategies for comprehending the online text, and understanding of the selected artifact. It also found that, despite the large amount of time spent on summarization (up to 66%), these learners often fail to correctly represent important features of scientific text and the relationships between those features. Implications of these findings at the design, empirical, and pedagogical levels are discussed.

Philip M. Johnson. An instrumented approach to improving software quality through formal technical review. In Proceedings of the 16th International Conference on Software Engineering, May 1994. [ .pdf ]

Formal technical review (FTR) is an essential component of all software quality assessment, assurance, and improvement techniques. However, current FTR practice leads to significant expense, clerical overhead, group process obstacles, and research methodology problems. CSRS is an instrumented, computer-supported cooperative work environment for formal technical review. CSRS addresses problems in the practice of FTR by providing computer support for both the process and products of FTR. CSRS also addresses problems in research on FTR through instrumentation supporting fine-grained, high quality data collection and analysis. This paper describes CSRS, a computer-mediated review method called FTArm, and selected findings from their use to explore issues in formal technical review.

Dadong Wan and Philip M. Johnson. Computer-supported collaborative learning using CLARE: the approach and experimental findings. In Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, Chapel Hill, North Carolina, October 1994. [ .pdf ]

Current collaborative learning systems focus on maximizing shared information. However, “meaningful learning” is not simply information sharing but, more importantly, knowledge construction. CLARE is a computer-supported learning environment that facilitates meaningful learning through collaborative knowledge construction. CLARE provides a semi-formal representation language called RESRA and an explicit process model called SECAI. Experimental evaluation through 300 hours of classroom usage indicates that CLARE does support meaningful learning, and that a major bottleneck to computer-mediated knowledge construction is summarization. Lessons learned through the design and evaluation of CLARE provide new insights into both collaborative learning systems and collaborative learning theories.

Philip M. Johnson. Report on the 1993 ECSCW tools and technologies workshop. In SIGOIS Bulletin, April 1994. [ http ]

Robert S. Brewer and Philip M. Johnson. Toward collaborative knowledge management within large, dynamically structured information systems. Technical Report CSDL-94-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1994. [ .pdf ]

Usenet is an example of the potential and problems of the nascent National Information Infrastructure. While Usenet makes an enormous amount of useful information available to its users, the daily data overwhelms any user who tries to read more than a fraction of it. This paper presents a collaboration-oriented approach to knowledge management and evaluation for very large, dynamic database structures such as Usenet. Our approach is implemented in a system called URN, a multi-user, collaborative, hypertextual Usenet reader. Empirical evaluation of this system demonstrates that this collaborative method, coupled with an adaptive interface, improves the overall relevance level of information presented to a user. Finally, the design of this system provides important insights into general collaborative knowledge management mechanisms for very large, dynamically structured database systems such as Usenet and the upcoming Information Superhighway.

Philip M. Johnson. Supporting technology transfer of formal technical review through a computer supported collaborative review system. In Proceedings of the Fourth International Conference on Software Quality, Reston, VA., October 1994. [ .pdf ]

Formal technical review (FTR) is an essential component of all modern software quality assessment, assurance, and improvement techniques, and is acknowledged to be the most cost-effective form of quality improvement when practiced effectively. However, traditional FTR methods such as inspection are very difficult to adopt in organizations: they introduce substantial new up-front costs, training, overhead, and group process obstacles. Sustained commitment from high-level management along with substantial resources is often necessary for successful technology transfer of FTR. Since 1991, we have been designing and evaluating a series of versions of a system called CSRS: an instrumented, computer-supported cooperative work environment for formal technical review. The current version of CSRS includes an FTR method definition language, which allows organizations to design their own FTR method, and to evolve it over time. This paper describes how our approach to computer supported FTR can address some of the issues in technology transfer of FTR.

Dadong Wan and Philip M. Johnson. Experiences with CLARE: a computer-supported collaborative learning environment. International Journal of Human-Computer Studies, 41:851-879, December 1994. [ .pdf ]

Current collaborative learning systems focus on maximizing shared information. However, “meaningful learning” is not simply information sharing but also knowledge construction. CLARE is a computer-supported learning environment that facilitates meaningful learning through collaborative knowledge construction. It provides a semi-formal representation language called RESRA and an explicit process model called SECAI. Experimental evaluation through 300 hours of classroom usage indicates that CLARE does support meaningful learning. It also shows that a major bottleneck to computer-mediated knowledge construction is summarization. Lessons learned through the design and evaluation of CLARE provide new insights into both collaborative learning systems and collaborative learning theories.

Philip M. Johnson. The Annotated Egret. Technical Report CSDL-94-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 1994.

Danu Tjahjono. Evaluating the cost-effectiveness of formal technical review factors. Ph.D. Dissertation Proposal CSDL-94-07, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 1994. [ .ps ]

The importance and benefits of formal technical review (FTR) as a method to improve software quality have been well documented, and yet there is a proliferation of methods in practice with varying degrees of success. Worse, the same methods are often practiced inconsistently, and the contribution of various review factors to review outcomes is not currently understood. This research proposes a new approach to assessing and studying the cost-effectiveness of various review factors. Our basic approach is to first develop a framework that classifies the similarities and differences of existing FTR methods from the perspective of their review processes. Specifically, the framework identifies important review factors that characterize a review process, such as the objective of a particular phase within the review process, the interaction mode among review participants, and the technique used during the phase. Second, we will develop a computer-assisted review system, CSRS version 3.0, that can be used as an experimental testbed for empirically evaluating different FTR factors that may impact the methods. Finally, we will design a controlled experiment to answer an important initial question concerning the cost-effectiveness of three examination techniques commonly used in existing FTR methods: the free review technique, the selective test cases technique, and the stepwise verification technique.

Rosemary Andrada. Redefining the web: Creating a computer network community. M.S. Thesis Proposal CSDL-94-09, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1994. [ .ps ]

Organizations are formed to accomplish a goal or mission, where individual members do their part and make a combined effort leading toward this goal. As the organization grows in size, the level of community inevitably deteriorates. This research will investigate the strengths and weaknesses of a computer-based approach to improving the sense of community within one organization, the Department of Computer Science at the University of Hawaii. We will assess the current level of community by administering a questionnaire to members of the department. Next, we will introduce a World Wide Web information system for and about the department in an effort to impact the level of community that exists. We will then administer another questionnaire to assess the level of community within the department after a period of use with the information system. We will analyze the results of both questionnaires and usage statistics logged by the system.

Philip M. Johnson. ECS design reference. Technical Report CSDL-94-13, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1994.

Philip M. Johnson. Collaboration in the small vs. collaboration in the large. In Proceedings of the 1994 CSCW Workshop on Software Architectures for Cooperative Systems, Chapel Hill, N.C., October 1994. [ .ps ]

Carleton A. Moore. Supporting authoring and learning in a collaborative hypertext system: The Annotated Egret Navigator. M.S. Thesis Proposal CSDL-94-16, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1994. [ .ps ]

1993

Philip M. Johnson and Danu Tjahjono. Improving software quality through computer supported collaborative review. In Proceedings of the Third European Conference on Computer Supported Cooperative Work, September 1993. [ .pdf ]

Formal technical review (FTR) is a cornerstone of software quality assurance. However, the labor intensive and manual nature of review, along with basic unresolved questions about its process and products, means that review is typically under-utilized or inefficiently applied within the software development process. This paper introduces CSRS, a computer-supported cooperative work environment for software review that improves the efficiency of review activities and supports empirical investigation of the appropriate parameters for review. The paper presents a typical scenario of CSRS in review, its data and process model, application to process maturation, relationship to other research, current status, and future directions.

Philip M. Johnson. Architectural concerns in EGRET. SIGOIS Bulletin, April 1993.

Danu Tjahjono. CSRS Design Specification. Technical Report CSDL-92-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1993.

Dadong Wan. CLARE: A computer-supported collaborative learning environment based on the thematic structure of research and learning artifacts. Ph.D. Thesis Proposal CSDL-93-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, January 1993. [ .pdf ]

This research concerns the representation issue in collaborative learning environments. Our basic claim is that knowledge representation is not only fundamental to machine learning, as shown by AI researchers, but also essential to human learning, and in particular to human metalearning. Few existing learning support systems, however, provide representations that help the learner make sense of and organize the subject content of learning, integrate a wide range of classroom activities (e.g., reading, reviewing, writing, discussion), and compare and contrast various viewpoints from individual learners. Our primary purpose is to construct an example instance of such a representation, to show that useful computational manipulations can be performed on it, and to show that the combination of the representation and related computational services can lead to improved learner performance on selected collaborative learning tasks.

Philip M. Johnson, Dadong Wan, Danu Tjahjono, Kiran Kavoori Ram, and Robert S. Brewer. EGRET requirements specification. Technical Report CSDL-93-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, April 1993. [ .ps ]

Dadong Wan. CLARE: a new approach to computer-supported collaborative learning. Technical Report CSDL-93-03, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1993. [ .ps.gz ]

Danu Tjahjono. CSRS: a new approach to software review. Technical Report CSDL-93-04, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1993.

Kiran Kavoori Ram. DSB: The next generation tool for software engineers. Technical Report CSDL-93-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1993. [ .pdf ]

Robert S. Brewer. URN: A new way to think about Usenet. Technical Report CSDL-93-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1993. [ .pdf ]

Philip M. Johnson, Danu Tjahjono, Dadong Wan, and Robert S. Brewer. Experiences with CSRS: An instrumented software review environment. In Proceedings of the Pacific Northwest Software Quality Conference, October 1993. [ .pdf ]

Formal technical review (FTR) is a cornerstone of software quality assurance. However, the labor-intensive and manual nature of review, along with basic unresolved questions about its process and products, means that review is typically under-utilized or inefficiently applied within the software development process. This paper discusses our initial experiments using CSRS, an instrumented, computer-supported cooperative work environment for software review that reduces the manual, labor-intensive nature of review activities and supports quantitative study of the process and products of review. Our results indicate that CSRS increases both the breadth and depth of information captured per person-hour of review time, and that its design captures interesting measures of review process, products, and effort.

Philip M. Johnson and Danu Tjahjono. CSRS User Guide. Technical Report CSDL-93-11, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 1993. [ .pdf ]

Philip M. Johnson. Improving software quality through formal technical review: A research agenda. Technical Report CSDL-93-12, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1993. [ .ps ]

Dadong Wan. CLARE User's Guide. Technical Report CSDL-93-15, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 1993. [ .ps.gz ]

Johnny Li. Documentation for the XView graphical browser. Technical Report CSDL-93-16, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1993.

Rosemary Andrada and Carleton A. Moore. Hyperbase server interface: Versions 2.0 and 2.1. Technical Report CSDL-93-18, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, October 1993. [ .pdf ]

Danu Tjahjono. Studying formal technical review methods using CSRS. Technical Report CSDL-93-19, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 1993. [ .ps ]

The importance of formal technical review and its benefits have been well documented, and yet there is a proliferation of methods in practice with varying degrees of success. This paper discusses a new approach to assess and study various aspects associated with the effectiveness of current review methods. Our basic approach is to use a computer assisted review system (CSRS) equipped with mechanisms to model different review methods and at the same time capture fine-grained measurements about the product and the process of the review. Through suitable experimental design, these data can be used to compare the different methods to each other.

Philip M. Johnson, Danu Tjahjono, Dadong Wan, and Robert S. Brewer. Gtables: From EGRET 2.x.x to EGRET 3.0.x. Technical Report CSDL-93-20, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, November 1993. [ .ps ]

Dadong Wan. CLARE 1.4.7 design document. Technical Report CSDL-93-24, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 1993. [ .ps.gz ]

1992

Philip M. Johnson. Supporting exploratory CSCW with the EGRET framework. In Proceedings of the 1992 Conference on Computer Supported Cooperative Work, November 1992.

Exploratory collaboration occurs in domains where the structure and process of group work evolves as an intrinsic part of the collaborative activity. Traditional database and hypertext structural models do not provide explicit support for collaborative exploration. The EGRET framework defines both a data and a process model along with supporting analysis techniques that provide novel support for exploratory collaboration. To do so, the EGRET framework breaks with traditional notions of the relationship between schema and instance structure. In EGRET, schema structure is viewed as a representation of the current state of consensus among collaborators, from which instance structure is allowed to depart in a controlled fashion. This paper discusses the issues of exploratory collaboration, the EGRET approach to its support, and the current status of this research.

Danu Tjahjono. Co2ReView: A collaborative code inspection and review environment. Technical Report CSDL-92-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, April 1992.

Dadong Wan and Philip M. Johnson. Supporting scientific learning and research review using COREVIEW. In Proceedings of the 1992 AAAI Workshop on Communicating Scientific and Technical Knowledge, July 1992. [ .pdf ]

Scientific learning and research are becoming increasingly computerized. More and more such activities are mediated through electronic artifacts. This paper presents an artifact-based system called COREVIEW, to be used in the domain of research seminars. The emphasis of our approach is on the centrality of textualized artifacts in seminar activities, the relationship between different types of artifacts, and the dynamic interactions among them over time. Our system provides explicit representation of research artifacts and their structures, and support for the process of collaborative artifact generation, integration, manipulation and utilization.

Philip M. Johnson. Collaborative software review for capturing design rationale. In Proceedings of the AAAI 1992 Workshop on Design Rationale, July 1992. [ .ps ]

Dadong Wan. Supporting collaborative learning through research reviews. Technical Report CSDL-92-05, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1992.

Dadong Wan. COREVIEW: A tool for supporting collaborative learning in seminars. Technical Report CSDL-92-06, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 1992.

Philip M. Johnson. An architectural perspective on EGRET. In Proceedings of the 1992 CSCW Workshop on Tools and Technologies, November 1992. [ .ps ]

Philip M. Johnson. Reverse engineering collaboration structures in Usenet. Technical Report CSDL-92-10, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December 1992. [ http ]

1991

Philip M. Johnson. Introduction to the Collaborative Software Development Laboratory. Technical Report CSDL-91-01, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, February 1991.

Philip M. Johnson, Dadong Wan, and Danu Tjahjono. EGRET design specification: Version 2.0. Technical Report CSDL-91-02, Department of Information and Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 1991.

Philip M. Johnson. The EGRET project: Exploring open, evolutionary, and emergent collaborative systems. In Proceedings of the 1991 ECSCW Developer's Workshop, September 1991.