History, Mystery, and Ballast
John Allen, Santa Clara University (Santa Clara, CA)
When I first showed up at Stanford some forty-plus years ago, I met Steve Russell's three-drawer filing cabinet with the labels "History", "Mystery", and "Ballast". In this paper I will use Steve's labels to revisit an early-1960s statement of John McCarthy: "It is reasonable to hope that the relationship between computation and mathematical logic will be as fruitful in the next century as that between analysis and physics in the last. The development of this relationship demands a concern for both applications and for mathematical elegance." We begin with enough history (of analysis, physics, and traditional engineering) to motivate our contention that Software Engineering is destined to follow a similar path, moving from a craft-based trade to a theory-based discipline. Next, the mystery. We will argue that McCarthy's hope was not in vain: in particular, that the relationship between computation and mathematical logic, as embodied in typed functional languages, offers both mathematical elegance and the promise of practical benefits. Finally, the ballast. Beginning with an attitude in 1975, and expanding into a full-featured offering, Ruth Davis and I have developed a course at Santa Clara University that brings a logical approach to Software Engineering.
John Allen holds Bachelors and Masters degrees in mathematics. He did graduate work in mathematics at the University of Chicago, and in logic at Stanford; he also did research in Computer Science at Stanford's Artificial Intelligence Lab. He taught at the University of California at Los Angeles, and at California State University at San Jose. Periodically he teaches at Santa Clara University. He is the author of "Anatomy of Lisp", and was the organizer of the seminal, first Lisp Conference, held at Stanford University in 1980. His interests are in the mathematical foundations of Software Engineering.
The Legacy Of Lisp
Henry Baker, Baker Capital Corporation (Encino, CA)
The Lisp language was invented by John McCarthy at MIT in the late 1950s and went on to become the "lingua franca" of Artificial Intelligence research during the following decades. In this talk, I give a highly personal view of the various features of Lisp that inspired Lisp's fanatical following, and try to place Lisp in the context of computing in the 21st century. In particular, I will give my personal view as to what went right with Lisp during the very important transition to commercial use, and what went wrong. Finally, I will offer some suggestions for computer languages of the future.
Henry Baker is a founding partner at Baker Capital, a private equity firm with $1.5 billion under management headquartered in New York City. Baker Capital invests in communications-related companies--examples include Akamai, Sand Video, and Permabit. Dr. Baker received his SB, MS, and PhD (1978) from MIT in the EECS Department and was an assistant professor in the CS Dept of the University of Rochester. While at MIT, Dr. Baker did research in parallel models of computation and in the implementation of computer languages, including real-time garbage collection. In 1980, Dr. Baker was one of a score of founders of Symbolics, Inc., a successful vendor of high-end workstations based on Lisp technology developed at MIT.
Common Larceny
Will Clinger, Northeastern University (Boston, MA)
Common Larceny is an implementation of Scheme for Microsoft's Common Language Runtime (CLR), which is part of .NET. Common Larceny interoperates with other CLR languages using the JavaDot notation of JScheme, generating the JavaDot interfaces just in time via reflection, and caching them for performance. All other differences between Common Larceny, Petit Larceny, and Larceny derive from differences in the compiler's target language and architecture: IL/CLR, ANSI C, or machine code. The Larceny family therefore offers a case study in the suitability of these target architectures for Scheme.
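The JavaDot convention the abstract refers to can be sketched as follows. This is an illustration based on JScheme's published convention, not Common Larceny's documented API, and the CLR class names are only assumptions: a trailing dot names a constructor, a leading dot an instance method, and Class.member a static method.

```lisp
;; Hedged sketch of JavaDot-style interop (JScheme convention; exact
;; behavior and class names in Common Larceny may differ).
(define sb (System.Text.StringBuilder.))   ; trailing dot: constructor call
(.Append sb "hello, ")                     ; leading dot: instance method
(.Append sb "CLR")
(display (.ToString sb))
(System.Console.WriteLine "static call")   ; Class.member: static method
```

Because these identifiers are resolved via reflection and cached, Scheme code pays the reflection cost only on first use of each member.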
William D Clinger received his BS and PhD degrees from Texas and MIT in 1975 and 1981. He has worked at Indiana University, Tektronix, Lightship Software, the University of Oregon, and Sun Microsystems, and has been a member of the computer science faculty at Northeastern University since 1994. His research interests center upon design, specification, and implementation of functional and higher-order programming languages. He has contributed to the development of Scheme, proved the correctness of a simple code generator and several compiler optimizations, and invented efficient algorithms for hygienic macro expansion, properly rounded conversions from decimal scientific notation to binary floating point, and older-first generational garbage collection.
Re-inventing Lisp for Ubiquity
Patrick Dussud, Microsoft Corporation (Redmond, WA)
Despite the popularity of newer languages, from dynamic scripting languages (Perl, Python, Ruby, etc.) to Java and C#, Lisp retains a significant advantage in code introspection and lightweight code generation. It also features a simple yet powerful runtime library. However, Lisp is not well integrated with mainstream computing platforms (native, Java, .NET CLR) because of representation choices and other issues. I argue for a new, strongly typed Lisp language that integrates well with the Java platform or the .NET CLR. By eliminating Lisp's main weaknesses, it would allow a renewal of the popularity of Lisp based on its unique strengths.
Patrick Dussud holds a Masters degree from École Nationale Supérieure des Mines de Saint-Étienne, and has worked at Microsoft for the past 11 years. His responsibilities have included designing garbage collectors and runtime architectures for languages such as VBA, JScript, MS Java, and the .NET CLR. Currently, he is the Lead Architect for the .NET Common Language Runtime, the Chief Architect for WinFX, and a member of the Windows Architecture team, helping with managed-code-related issues.
Before Microsoft, Patrick was the lead designer of the System Internals of the TI Explorer workstation, and re-engineered most of the rest of the runtime components, leading to a successful, stable Lisp system. His work on TICLOS was notable for its innovative solutions, and received accolades from other CLOS implementers. Later he worked at Lucid as the Chief Architect of Energize, a C++ programming environment motivated by Lisp-machine-like programming environments.
Conscientious Software
Richard Gabriel, Sun Labs (Menlo Park, CA)
Responsibility for testing and devising how to install new software rests with the development team. However, people using the software every day must be able to shape and customize it without reliance on the software's original developers. Thus future innovations in software will have to produce systems that actively monitor their own activity and their environment, that continually perform self-testing, that catch errors and automatically recover from them, that automatically configure themselves during installation, that participate in their own development and customization, and that protect themselves from damage when patches and updates are installed. Such software systems will be self-contained, including within themselves their entire source code, code for testing, and anything else needed for their evolution.
Richard P. Gabriel is the award-winning author of four books, two poetry manuscripts, and over a hundred technical papers and essays. He lives in California.
Curl: A Content Language for the Web
Bert Halstead, Curl, Inc. (Cambridge, MA)
The Web has transformed how we access information and applications, but Web application platforms have lagged behind the vision: Highly usable Web applications are too hard to build using established Web technology platforms. Web content needs to span the spectrum from simple formatted text, through graphics, animations, and scripting, to enterprise-scale applications that use client-side computing. Curl is the world's first "content language" spanning this whole spectrum in one unified framework. The Curl language and implementation borrow from many intellectual traditions in computer science, notably including the Lisp tradition. The Curl product is now commercially available and is enjoying rapidly increasing adoption.
Bert Halstead is the chief architect at Curl Corporation, where he has worked for several years on the design and implementation of the Curl content language and its associated software. Bert's admiration for Lisp dates back to his undergraduate days in the 1970's. After receiving his Ph.D. degree from M.I.T. in 1979, Bert served as a member of the M.I.T. computer science faculty, where he coauthored a computer architecture textbook and developed the Multilisp programming language, which introduced the use of "futures" in a practical implementation of a parallel, Lisp-based programming language. Later, Bert worked as a research staff member at the Digital Equipment Corporation Cambridge Research Laboratory in Cambridge, Massachusetts, where he did research on tools to help programmers develop parallel programs and understand their behavior.
English as a Macro Language and Programming Environment for Lisp
Henry Lieberman, Massachusetts Institute of Technology (Cambridge, MA)
It is often said that Lisp programmers don't write programs to solve problems; they use Lisp to write languages in which solutions to their problems become simple. People often find it easier to express solutions in English, so we may ask "Why not use English as a high-level solution language, letting the machine help turn it into code?" New advances in natural language processing and Common Sense reasoning now make it more feasible to use natural language in programming. We present Metaphor, a programming environment that acts like an outliner for prose writing, providing "scaffolding" code that can later be elaborated into a full program.
Henry Lieberman has been a Research Scientist at the MIT Media Laboratory since 1987, where he directs the Software Agents group, which is applying AI to user interfaces. He is editing a book on End-User Development (2005, Kluwer), and has previously edited "Spinning the Semantic Web" (2003, MIT Press) and "Your Wish is My Command" (2001, Morgan Kaufmann). He is currently working on Common Sense knowledge and reasoning for interactive applications. From 1972 to 1987, he was a researcher at the MIT Artificial Intelligence Laboratory (now CSAIL). During that time he worked with Carl Hewitt on "actors", an early object-oriented, parallel language, and he also developed the notion of prototype object systems as well as the first real-time garbage collection algorithm. Earlier, he worked with Seymour Papert's group, which developed the educational language Logo. He holds a doctoral-equivalent degree (Habilitation) from the University of Paris VI and was a Visiting Professor there in 1989-90.
Beyond Lisp
John McCarthy, Stanford University (Stanford, CA)
Lisp has survived since 1959. Common Lisp and Scheme are also quite old - 1980s I think. Any improvement on Lisp should preserve the fact that Lisp programs are Lisp data. I'll discuss extending Lisp in the direction of putting logical assertions in - full first and maybe second order logic with a "heavy duty" set theory.
John McCarthy is Professor of Computer Science at Stanford University. He has been interested in artificial intelligence since 1948 and coined the term in 1955. His main artificial intelligence research area has been the formalization of common sense knowledge. He invented the LISP programming language in 1958, developed the concept of time-sharing in the late fifties and early sixties, and has worked on proving that computer programs meet their specifications since the early sixties. He invented the circumscription method of non-monotonic reasoning in 1978.
McCarthy received the A. M. Turing award of the Association for Computing Machinery in 1971 and was elected President of the American Association for Artificial Intelligence for 1983-84 and is a Fellow of that organization. He received the first Research Excellence Award of the International Joint Conference on Artificial Intelligence in 1985, the Kyoto Prize of the Inamori Foundation in November 1988, and the National Medal of Science in 1990. He is a member of the American Academy of Arts and Sciences, the National Academy of Engineering and the National Academy of Sciences. He has received honorary degrees from Linköping University in Sweden, the Polytechnic University of Madrid, Colby College, Trinity College, Dublin and Concordia University in Montreal, Canada. He has been declared a Distinguished Alumnus by the California Institute of Technology.
Correctness-by-Construction is in your future
James McDonald, Kestrel Institute (Palo Alto, CA)
The size of software/hardware systems and applications is accelerating at a rate that threatens to permanently outpace our ability to verify them. Thus an increasingly important need arises to certify crucial properties of vital or ubiquitous applications. Given such pressures, we make a case that a correct-by-construction approach is the only realistic game in town. Hence anyone (or any company) involved in software production should be prepared to adjust to a not-too-distant world in which that will be the dominant software/hardware paradigm. Specific success stories are provided.
James McDonald has worked at Kestrel Institute since 1992, where he designed and implemented Specware, a tool for correct-by-construction synthesis. While contributing to its continued development, he has served as Principal Investigator on several Specware-based projects, and is a founder of Kestrel Technology, LLC, created to commercialize such technology.
McDonald was a founder of Lucid, Inc. in 1984, where he implemented much of Lucid Common Lisp and managed technology transfer to about 30 platforms. Earlier at Stanford, he worked on Meta-Dendral for scientific theory formation, and on VALID and XCHECK, automated instructors for Stanford's introductory logic and set theory courses. He ported PSL to VM/CMS for IBM.
McDonald received his Bachelor of Science from Michigan State University in 1974 as an Alumni Distinguished Scholar, and did graduate work at Stanford in later years, entering the PhD program in Artificial Intelligence.
How Lisp will Save the World
Jeff Shrager, Carnegie Institution of Washington, Department of Plant Biology (Stanford University, Stanford, CA)
At my ILC2002 talk, "Symbolic Computing in the Age of Biological Knowledge", I said that biology "is guided by an ocean of knowledge," and that symbolic computing would "soon be paramount in every aspect of biology." As the hurricane of activity in knowledge-based bioinformatics has demonstrated, no more prophetic words have been spoken (at any ALU conference)! This year I'll review some of the most interesting aspects of recent activity in symbolic biocomputing. I'll explain why these problems are important, and why, as I said in 2002, "Symbolic Computing [a.k.a. Lisp] will save biology, and biology will save Symbolic Computing."
Jeff Shrager is a research fellow at the Carnegie Institution of Washington, Department of Plant Biology, where he studies Cyanobacteria (the most important organisms on earth); a consulting Associate Professor in Symbolic Systems at Stanford, where he teaches interaction analysis and symbolic biocomputing (separately); a research scientist at the Institute for the Study of Learning and Expertise, where he works on scientific discovery by humans and computers (jointly!); and a member of the research staff at PARC, where he works on collaborative knowledge-based decision-making. He probably has several other appointments that he can't remember where he does something else altogether.
A Mechanized Program Verifier
J Strother Moore, University of Texas (Austin, TX)
Following advances in machine-aided reasoning over the years, the "Boyer-Moore Project" has supplied many success stories in mechanized proofs of correctness, including verification of: a Berkeley C string library compiled by gcc into Motorola 68020 machine code; the IEEE compliance of AMD floating-point units (before fabrication); and important properties of the Java bytecode verifier and class loader (all modeled in Lisp). Thus we may ask: "Why is Lisp so uncommonly effective as the basis of a mechanized theory of computation?" We will show how our heuristics for discovering many simple proofs permit users to guide the system into truly complex proofs, which, combined with the classic power of Lisp systems, results in a very effective tool.
J Strother Moore received his BS degree from MIT in 1970, and his PhD from the University of Edinburgh in 1973. He currently holds the Admiral B.R. Inman Centennial Chair in Computing Theory at the University of Texas at Austin, and is also department chair. Moore was a founder of Computational Logic, Inc., and served as its chief scientist for ten years. He and Bob Boyer jointly developed the Boyer-Moore theorem prover and the Boyer-Moore fast string searching algorithm. Additionally, he and Matt Kaufmann are co-authors of the ACL2 theorem prover. He and Bob Boyer were awarded the 1991 Current Prize in Automatic Theorem Proving by the American Mathematical Society. In 1999 they were awarded the Herbrand Award for their work in automatic theorem proving. Moore is a Fellow of the American Association for Artificial Intelligence.
AllegroCache: A High-Performance Object Database for Large Complex Problems
Jans Aasman, Franz, Inc. (Oakland, CA)
AllegroCache, Franz Inc.'s new persistent object-oriented layer on top of CLOS, provides a persistent object extension to Common Lisp and is also a full-fledged, industrial-grade database. It is most suitable for commercial, large-scale, and complex applications where classical relational databases cannot be used effectively. Also, it runs on all 32- and 64-bit platforms supported by Allegro CL. The paper discusses Franz's Common Lisp-based B-tree system that underlies AllegroCache, as well as AllegroCache's current feature set and road map for future development. It concludes with some applications and prototypes that have already benefited significantly from the capabilities of AllegroCache.
Jans Aasman started out as an experimental and cognitive psychologist. He earned his Ph.D. in cognitive science with a detailed model of car driver behavior using Lisp and Soar. He spent most of his professional life in telecommunications research, specializing in intelligent user interfaces and applied artificial intelligence projects. From 1995 to 2004 he was also a part-time professor in the Industrial Design department of the Technical University of Delft. Jans joined Franz Inc. in 2004, and is currently its Director of Engineering.
Optimizing Numerical Performance with Symbolic Computation
John Amuedo, Signal Inference Corporation (Los Angeles, CA)
Lambda the (ultimate) processing element? Techniques are presented for optimizing large-grid physical simulations (e.g., numerical solution of Navier-Stokes or Maxwell's equations on irregular domains). Uniform properties of a physical domain are described functionally by composing aggregate-wise operations on sub-grids. Non-uniformities are modeled element-wise, using compiled lambda expressions. This strategy appears to provide several performance benefits, including localization of grid element state in physical memory, and physical address resolution at compile-time. In many cases, overhead associated with iteration across multi-dimensional data using nested loops in conventionally organized physical simulations can be greatly reduced or eliminated. Compiled Lambda expressions that simulate single grid elements or sub-grids may be scheduled autonomously for parallel execution. They may also be evaluated on demand in any physically consistent order. This capability is useful for cellular simulation of wavefront propagation.
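The element-wise strategy described above can be illustrated with a minimal sketch (all names and the update rule are hypothetical, chosen only for illustration): each non-uniform grid cell gets its own compiled update function, with its coefficient and its neighbors' indices resolved at compile time rather than recomputed in an inner loop.

```lisp
;; Hedged sketch: compile one update closure per non-uniform cell,
;; baking the cell's coefficient and neighbor indices into the code so
;; no index arithmetic or dispatch remains at simulation time.
(defun make-cell-updater (coeff left right)
  (compile nil
           `(lambda (grid)
              (declare (type (simple-array double-float (*)) grid))
              (* ,coeff (+ (aref grid ,left) (aref grid ,right))))))
```

Uniform sub-grids would instead share a single aggregate-wise operation; the per-cell closures carry no shared state, so they can be scheduled independently or evaluated on demand, as the abstract notes.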
John Amuedo is founder of Signal Inference, a Los Angeles engineering firm that provides computational support for the scientific and entertainment industries. As a Research Scientist at the MIT AI Lab, John founded the Music Cognition Group with Marvin Minsky and pioneered techniques for automated composition of music from symbolic specifications, and knowledge-based strategies for analyzing superimposed acoustic signals. John has also worked to improve software development practices in the DARPA HPCS and national laboratory communities. His current research areas include spectral methods for numerical solution of fluid-dynamic and propagating-wave problems, classification of acoustic signals, accurate time integration methods, and augmenting Lisp as an extensible, high-productivity replacement for Matlab.
GOALIE: A Common Lisp Application to Discover Kripke Models
Marco Antoniotti, Courant Institute of Mathematical Sciences (New York, NY)
Naren Ramakrishnan, Courant Institute of Mathematical Sciences (New York, NY)
Bud Mishra, Courant Institute of Mathematical Sciences (New York, NY)
GOALIE is a Common Lisp application that redescribes numerical gene expression value measurements into formal temporal logic models of biological processes. It finds extensive uses in the analysis of microarray and other high-throughput biological data sets. GOALIE incorporates several statistical, logical, and ontological modules, connected together through an architecture that exploits various features of several Common Lisp libraries in order to smoothly integrate with popular bioinformatics formats and databases--the most notable example being the Gene Ontology (GO) with the associated GO database.
Marco Antoniotti is a Senior Research Scientist in the NYU Courant Bioinformatics Group. His interests concentrate in the field of Computational and Systems Biology. He is the author or co-author of several software systems--SHIFT from UC Berkeley, Jester from PARADES, and Simpathica from NYU--and co-authored two patents in the field of Genomics Optical Mapping. He received his PhD from New York University in 1995.
Naren Ramakrishnan is an associate professor of computer science at Virginia Tech. His research interests span computational science, mining scientific data, and information personalization. He is the recipient of a 2000 NSF CAREER grant, and the 2001 New Century Technology Council Innovation Award. He currently serves on the editorial board of IEEE Computer. Ramakrishnan received his PhD in computer sciences from Purdue in 1997.
Bud Mishra is a professor of computer science and mathematics at NYU's Courant Institute, and a professor of cell biology at NYU School of Medicine. He has developed sophisticated algorithms for problems that range from deciphering the genomes of pathogens (E. coli, P. falciparum, etc.) to understanding chromosomal aberrations that are implicated in cancer. His most recent focus has been on a bioinformatics environment, dubbed Valis, supplying better computational tools. Mishra received his MS and PhD in Computer Science from Carnegie Mellon University.
Application Development in CLOS/CLIM to Delivery on Multiple Platforms
Sheldon Ball, VA Greater Los Angeles Health Care System (Los Angeles, CA)
Anvita eReference is a prototypical electronic reference integrating clinical medicine and basic biomedical science. It provides a common interface to: 1) decision support for the diagnosis and management of disease; 2) representations of the molecular basis of disease (molecular pathology); and 3) a database of the basic biomedical sciences (i.e. anatomy, physiology, biochemistry). Fundamental issues addressed in this project are knowledge representation and interface development in the domains of basic science and clinical medicine and seamlessly integrating the two. Although the major focus of Anvita is internal medicine and molecular pathology, the underlying framework is adaptable to other scientific domains.
Dr. Sheldon Ball received his B.S. (1974) and Ph.D. (1978) in Chemistry from the University of California at Davis, and his M.D. (1983) from the University of Miami. He has received Medical Board Certification in Clinical Pathology (1989), Internal Medicine (1999), and Geriatrics (2002); and his work experience includes: Research Pathologist (UCLA), Assistant Professor of Pathology (University of Mississippi 1990-1994), Assistant Professor of Pathology (Medical College of Pennsylvania 1994-1996), Adult Medicine Physician (Kaiser Permanente 1999-2001), and Geriatric Fellow (UCLA 2001-2002). Ball is currently Special Fellow in Advanced Geriatrics and Gerontology, Veterans Administration (GRECC). He has been programming in Common Lisp since 1988.
Common Lisp For Java: An Intertwined Implementation
Jerry Boetje, College of Charleston (Charleston, South Carolina)
Common Lisp for Java is an open-source implementation of Common Lisp that executes in the Java Runtime Environment. Under development by undergraduates over multiple semesters, CLforJava differs from other implementations in its meshing of the two languages without a Foreign Function Interface. Following the natural techniques of each language, CLforJava has a Java API to Lisp components, and Java methods are accessible via generic functions. Documentation from CL forms and doc strings is stored in XML, enabling generation of documentation via XSL transformations. Internationalization support is provided through integration with Unicode 4.0, locale-based comparisons, a file encoding type, and non-Western character symbol names and digits.
Jerry Boetje holds SB and SM degrees from MIT (1972) and has done additional CS graduate work at Brown University in 1986/7. He started his career participating in system designs at Draper Laboratory in Cambridge, MA. A developer of VAX LISP, he was an architect and director of software engineering for a number of companies on the East and West coasts. He is now an Instructor in the CS department of the College of Charleston (SC), working with undergraduate and graduate students.
Unicode 4.0 in Common Lisp: Adoption of Unicode 4.0 in CLforJava
Jerry Boetje, College of Charleston (Charleston, South Carolina)
The Common Lisp standard contains assumptions regarding ordering, case, naming, and reader attributes appropriate to ASCII encodings. Other weaknesses are its limited view of character and string ordering and the domain of characters acceptable to the Reader. Supporting Unicode 4.0, CLforJava provides rational, backward-compatible extensions to the Common Lisp standard for characters and strings. Using Java 1.5 features, CLforJava supports 21-bit characters, code points, conformance to the approximately 15,000 named characters, locale-based comparisons and formatting, I/O operations using any of the IANA-defined file encodings, and non-Western symbol names and digits. We conclude with a proposal for updating the Common Lisp standard for a global environment.
Mixing Lisps in Kawa
Per Bothner, Consultant (San Jose, CA)
Kawa started as a Scheme implementation written in Java, based on compiling Scheme forms to Java bytecodes. It has developed into a powerful Scheme dialect whose strengths include speed and easy access to Java classes. It is Free Software that some companies depend on. The Kawa compiler and run-time environment have been generalized to implement other languages besides Scheme, both in the Lisp family (Emacs Lisp, Common Lisp, and BRL) and outside it (XQuery, Nice). This paper focuses on the differences and challenges of implementing Common Lisp (not usable yet) and Emacs Lisp, which supports the JEmacs editor.
Per Bothner received degrees from University of Oslo and Stanford (Ph.D, 1988). He was an early employee at Cygnus Support, the pioneering company based on Free Software, where he worked on a number of projects. He was the designer and technical lead of Gcj, a Java ahead-of-time compiler based on GCC. Per also developed Kawa, which compiles Scheme functions on-the-fly into Java classes. The Kawa framework is also being used to implement other languages, including Emacs Lisp (JEmacs), Common Lisp, and XQuery. Per now works in the San José area as a consultant supporting Kawa.
LLAVA: Java in Lisp Syntax
Harold Carr, Sun Microsystems (Santa Clara, CA)
LLAVA is Java in Lisp (lack of) syntax (rather than a Lisp or Scheme written in Java). LLAVA does not contain special syntax or functions to call Java methods nor does it define an orthogonal set of types (such as Scheme strings or Common Lisp arrays). Instead, LLAVA is Java expressed in typical prefix list syntax with all data being native Java data types (e.g., instances of Java classes). LLAVA adds additional types (e.g., PAIR, PROCEDURE, SYMBOL and SYNTAX) to enable one to work with lists and to define procedures and macros.
Dr. Harold Carr is a Senior Staff Engineer with Sun Microsystems. He has fifteen years of experience in distributed computing. He helped write the OMG Portable Object Adapter specification and was chairperson of the OMG Portable Interceptor specification. He is responsible for the core PEPT messaging architecture of the Object Request Broker and of JAX-RPC in both J2EE and J2SE.
At the University of Utah, Dr. Carr worked on Portable Standard Lisp, Utah Common Lisp, Concurrent Utah Scheme and Distributed C++ with Hewlett-Packard Research Laboratories and Schlumberger Research Laboratories. He was Chief Architect of Visual Lisp technology at Autodesk, and was a logic simulation consultant for Cirrus Logic.
A Timely Knowledge-Based Engineering Platform for Collaborative Engineering and Multidisciplinary Optimization of Robust Affordable Systems
David Cooper, Genworks International (Birmingham, MI)
Since the 1980s, Knowledge-Based Engineering (KBE) technology has been used to capture and automate design and engineering in industries such as aircraft and automobiles. The GDL platform from Genworks International represents a "next generation" KBE toolkit.
GDL provides broader benefits than traditional KBE tools, including: Portable web-based development and runtime environments; compatibility with contemporary data exchange formats; independence from proprietary CAD systems; and robust underlying commercial components (Allegro CL and SMLib surface/solid modeling). GDL provides automatic caching and dependency tracking for tractable runtime performance of large models, minimal source code volume, and efficient model development and debugging.
David J. Cooper, Jr. is the President and Chief Engineer of Genworks International, a premier vendor of a Knowledge Base language and development tool useful for automating engineering and business processes. Mr. Cooper has been with Genworks since 1997. Before that, he spent about five years with the Ford Motor Company in the Knowledge Based Engineering Department. He has been a principal presenter at ICAD, Lisp, and KBE Conferences worldwide, most recently at the OMG Technical Meeting for KBE Standardization in San Francisco. Mr. Cooper holds the following degrees from the University of Michigan: a BA in German, a BS in Computer Science, and a Masters in Computer Science.
How to Make Lisp More Special
Pascal Costanza, Vrije Universiteit Brussel (Brussels, Belgium)
slides audio
Common Lisp provides generalized places that can be assigned via the SETF macro, and provides ways to hook into SETF. Past attempts to extend this to rebinding places, similar to "special" variables, led to potentially incorrect behavior: new "bindings" are created by side effects, and therefore affect all threads that access the same places. Instead of storing values directly, we can store symbols in those places and associate the values as symbol values. This allows us to confine new bindings to the current thread. As an illustration, I provide a DLETF framework and a CLOS metaclass SPECIAL-CLASS.
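The symbol-indirection idea can be sketched in a few lines of portable Common Lisp. This is a hypothetical illustration only, not Costanza's actual DLETF code: CELL, CELL-VALUE, and WITH-CELL-BINDING are names invented here. The key observation is that PROGV establishes a dynamic (special) binding of the indirection symbol, and multithreaded implementations confine such bindings to the current thread.

```lisp
;; Hypothetical sketch of the symbol-indirection idea (not the actual
;; DLETF implementation): instead of storing a value in a slot, store
;; an uninterned symbol and keep the value as that symbol's value.
(defclass cell ()
  ((indirection :initform (gensym) :reader cell-indirection)))

(defun cell-value (cell)
  (symbol-value (cell-indirection cell)))

(defun (setf cell-value) (new-value cell)
  (setf (symbol-value (cell-indirection cell)) new-value))

(defmacro with-cell-binding ((cell value) &body body)
  "Rebind CELL's value dynamically for the extent of BODY.  Because
PROGV creates a special binding of the indirection symbol, the new
binding is confined to the current thread."
  `(progv (list (cell-indirection ,cell)) (list ,value)
     ,@body))
```

Outside the WITH-CELL-BINDING form, readers see the globally assigned value; inside it, only the current thread sees the rebinding.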
Pascal Costanza has a Ph.D. from the University of Bonn, Germany, and is a research assistant at the Programming Technology Lab of the Vrije Universiteit Brussel, Belgium. His past involvements include specification and implementation of the languages Gilgul and Lava, and the design and application of the JMangler framework for load-time transformation of Java class files. He has also implemented aspect-oriented extensions for CLOS, and currently explores possibilities for making object-oriented programs better adaptable to the context of their use. He is furthermore the initiator and lead of Closer, an open source project that provides a compatibility layer for the CLOS MOP across multiple Common Lisp implementations. He has also co-organized numerous workshops on Unanticipated Software Evolution, Aspect-Oriented Programming, Object Technology for Ambient Intelligence, Lisp, and redefinition of computing.
Functional Programming for Signal Processing: There's More to Life than Inner Loops
Roger Dannenberg, Carnegie Mellon University (Pittsburgh, PA)
slides audio
Nyquist is a very high-level, Lisp-based language for signal processing and music composition. Although one might expect a language relying on garbage collection, lazy evaluation, and interpreted Lisp to suffer in performance, Nyquist is usually faster than C-based alternatives. Two factors account for this: (1) Most computation is in inner signal-processing loops, which are compiled from high-level descriptions and use techniques that would be too tedious to code by hand. (2) The expressiveness of Nyquist allows mixed sample rates, sample caching, and other algorithmic optimizations. Thus, high-level language features make Nyquist more usable and more efficient.
Dr. Roger B. Dannenberg is an Associate Research Professor of Computer Science and Art on the faculty of the School of Computer Science and School of Art at Carnegie Mellon University, where he is also a fellow of the Studio for Creative Inquiry. Dannenberg is well known for his computer music research, especially in programming languages and real-time interactive systems. His pioneering work in computer accompaniment led to the SmartMusic system now used by tens of thousands of music students. As a trumpet player, he has performed in concert halls ranging from the historic Apollo Theater in Harlem to the Espace de Projection in Paris.
The GNU ANSI Common Lisp Test Suite
Paul Dietz, Motorola Global Software Group (Schaumburg, IL)
slides audio
ANSI Common Lisp is blessed to have a large, comprehensive standard. As part of the effort to bring GNU Common Lisp (GCL) into compliance with this standard, a large test suite has been constructed. The suite, distributed as part of the GCL source tree, contains over 20,000 individual tests. In addition, it has two varieties of randomized high-volume testers for Lisp compilers; these have quickly found compiler bugs in every implementation on which they have been run. The paper also discusses issues with the ANSI CL specification itself revealed during the writing of the test suite.
Paul Dietz is a software engineer at Motorola in the Software Design Automation group. He received his doctorate in computer science in 1984 from Cornell, and has been using Lisp for three decades. His interests include algorithms, compilers, and software testing.
Langutils: A Fast Natural Language Toolkit for Common Lisp
Ian Eslick, MIT Media Lab (Cambridge, MA)
Hugo Liu, MIT Media Lab (Cambridge, MA)
In recent years, Natural Language Processing (NLP) has emerged as an important capability in many applications and areas of research. Natural language can be both the domain of an application and an important part of the human-computer interface. This paper describes the design and implementation of "langutils", a high-performance natural language toolkit for Common Lisp. We introduce the problems of real-world NLP and explore trade-offs in the representation and implementation of tokenization, POS tagging, and phrase extraction. The paper concludes with a discussion of the use of the toolkit in two natural language applications.
Ian Eslick is a Master's candidate in the Commonsense Computing group of the MIT Media Laboratory. He is researching the role of "knowledge rich engineering" in imbuing computers with commonsense reasoning capabilities. Prior to joining the Media Lab, Eslick was an entrepreneur in the semiconductor and telecommunications industries and he most recently served as a Director of Engineering at Broadcom Corporation.
Hugo Liu is a PhD student at the MIT Media Lab. His research applies machine learning and narrative understanding to the computational modeling of phenomena such as, inter alia, aesthetics, culture, personality & attitudes, identity, gastronomy, and imagination. His computational approach to understanding and modeling the self is most influenced by the speculative cognitive architectures of Deleuze, Bergson, Kierkegaard, and Nietzsche. Hugo writes Python but thinks in the lambda calculus.
Using delayed streams to discern changing conditions in complex environments: Monitors in Apex 3.0
Will Fitzgerald, NASA Ames Research Center (Mountain View, CA)
Michael Freed, NASA Ames Research Center (Mountain View, CA)
Apex is a NASA project that provides a number of components for creating and modeling intelligent agents, used for applications from human simulation to controlling an autonomous rotorcraft. Much of the core technology is written in Common Lisp. The most recent version of Apex, version 3.0, provides new capabilities for monitoring changing conditions in complex environments. In this paper, we describe these capabilities, focusing especially on the use of delayed streams for efficient computation of conditions based on state variable histories.
Will Fitzgerald is an R&D scientist at NASA Ames Research Center, where he works on the Apex software architecture. Michael Freed, at Ames by appointment through the Institute for Human and Machine Cognition, is the project lead and original developer of Apex and also serves as autonomy lead for the NASA/Army Autonomous Rotorcraft Project.
CLFD: A Finite Domain Constraint Solver in Common Lisp
Stephan Frank, Technical University Berlin (Berlin, Germany)
Petra Hofstedt, Technical University Berlin (Berlin, Germany)
slides
In recent years constraint programming has gained much interest and is to a certain extent present in almost every Prolog system. There are also constraint libraries for imperative languages like Java. In the Lisp world, SCREAMER is the established tool for finite domain and interval constraint solving. However, current research has developed efficient pruning techniques for powerful global constraints like the all-different or global cardinality constraints. Such constraints are not easy to integrate into SCREAMER. We present CLFD to provide the base for a more extensible finite domain constraint solver in Common Lisp. The modular design makes it possible to experiment with different and enhanced domain representations, pruning, and search techniques.
Stephan Frank received his diploma in computer science from the Technical University Berlin in 2002. Since then he has been pursuing his PhD research in the area of constraint solving and language integration. In an earlier life he worked for several years for a small computer graphics company in Berlin.
Petra Hofstedt received her diploma in computer science from Technical University Dresden in 1995. After that she did research in the areas of parallel functional programming languages and constraint solving. She received her PhD from the TU Dresden in 2001 for her work on constraint solver combination and coordination. Since 1999 she has been a member of the compiler construction group at the Technical University Berlin, working in the area of language integration.
Implementing S-expression Based Extended Languages in Lisp
Tasuku Hiraishi, Kyoto University, Graduate School of Informatics (Kyoto, Japan)
Masahiro Yasugi, Kyoto University, Graduate School of Informatics (Kyoto, Japan)
Taiichi Yuasa, Kyoto University, Graduate School of Informatics (Kyoto, Japan)
slides audio
Many extended, C-like languages can be implemented by translating them into C. This paper proposes an extension scheme for SC languages (extended/plain C languages with an S-expression based syntax). The extensions are implemented by transformation rules over S-expressions, that is, Lisp functions with pattern-matching on S-expressions. Thus, many flexible extensions to C can be implemented at low cost, because of (1) the ease with which new constructs can be added to an SC language, and (2) the pre-existing Common Lisp capabilities for reading/printing, analyzing, and transforming S-expressions themselves. We also present a practical example of just such an extended language.
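The flavor of such a transformation rule can be conveyed by an ordinary Lisp function that destructures an S-expression and returns rewritten S-expressions. This is an invented example, not the paper's actual rule syntax: the SWAP! construct and the SC-style output operators (BEGIN, DEF, =) are assumptions made purely for illustration.

```lisp
;; Hypothetical transformation rule (not the paper's actual syntax):
;; rewrite the extended construct (swap! a b) into plain SC-style
;; S-expressions that exchange two int variables via a temporary.
(defun expand-swap (form)
  (destructuring-bind (op a b) form
    (declare (ignore op))
    (let ((tmp (gensym "TMP")))
      `(begin (def ,tmp int)     ; declare a fresh temporary
              (= ,tmp ,a)        ; tmp = a;
              (= ,a ,b)          ; a = b;
              (= ,b ,tmp)))))    ; b = tmp;

;; (expand-swap '(swap! x y)) returns a BEGIN block of four forms.
```

Because the rule is just a Lisp function over lists, the full language (GENSYM for fresh names, quasiquotation for templates) is available when writing extensions.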
Tasuku Hiraishi received a Bachelors degree in engineering in 2003, and a Master of Informatics in 2005, both from Kyoto University. He is currently a PhD candidate at Kyoto's Graduate School of Informatics. His research interests include programming languages and parallel processing. He is a student member of the Japan Society for Software Science and Technology.
Masahiro Yasugi received a Bachelors degree in electronic engineering, a Masters degree in electrical engineering, and a Ph.D. degree in information science from the University of Tokyo in 1989, 1991 and 1994, respectively. In the mid-1990s he was a fellow of the JSPS (at the University of Tokyo and the University of Manchester). He became an Assistant Professor at Kyoto University in 1998, and has been an Associate Professor there since 2003. His research interests include programming languages and parallel processing.
Taiichi Yuasa received a Bachelor of Mathematics degree in 1977, a Master of Mathematical Sciences degree in 1979, and the Doctor of Science degree in 1987, all from Kyoto University. He joined the faculty of the Research Institute for Mathematical Sciences, Kyoto University, in 1982. He is currently a Professor at the Graduate School of Informatics, Kyoto University. His current areas of interest include symbolic computation and programming language systems.
A Model-Based Architecture for Entertainment Applications
Matthias Hölzl, Institut für Informatik, Ludwig-Maximilians-Universität (Munich, Germany)
Many current single-player games manage to create the illusion of a rich environment. But with few exceptions these games rely on relatively basic mechanisms like finite state machines and on scripted responses; therefore they have to constrict player behavior. We are currently building an edutainment program that allows a player to explore a large space of possible scenarios without undue restrictions. This is achieved by an architecture built on a model-based reasoning system combined with planning and decision components. The reasoning component is provided by the Snark theorem prover extended with procedural attachments for diachronic reasoning.
Matthias Hölzl received his diploma in mathematics in 1999 and his doctorate in computer science in 2001. He currently works as a researcher at Ludwig-Maximilians-Universität München in the group on Programming and Software Engineering (PST). His recent research has focused on dynamic programming languages, automated reasoning tools, and distributed systems.
A Framework for Dynamic Service-Oriented Architectures
Matthias Hölzl, Institut für Informatik, Ludwig-Maximilians-Universität (Munich, Germany)
Conventional development approaches for service-oriented applications force programmers to deal with many low-level issues and tie the application structure to a particular flavor of service-oriented computing. We describe a framework for service-oriented architectures that leverages the facilities of dynamic languages to avoid these problems. Our framework allows the programmer to use services explicitly where this is necessary, but otherwise hides the intricacies of service discovery and inter-service communication. This is achieved by the introduction of layered protocols that can be specialized at the appropriate level and by making use of introspective and reflective language features.
Matthias Hölzl received his diploma in mathematics in 1999 and his doctorate in computer science in 2001. He currently works as a researcher at Ludwig-Maximilians-Universität München in the group on Programming and Software Engineering (PST). His recent research has focused on dynamic programming languages, automated reasoning tools, and distributed systems.
Rapid Data Prototyping: Crafting Directionless Data Into Useful Information
Rusty Johnson, Northrop Grumman Mission Systems (Falls Church, VA)
Peter Lindahl, Northrop Grumman Mission Systems (Falls Church, VA)
William Anderson, Mystech Associates (Falls Church, VA)
slides audio
Increasing demands for meaningful information extracted from constantly changing data require the use of fast and flexible tools to engage subject matter experts in the solution construction and refinement process. Data abstraction must support the developer in the "tactile" handling of the data and domain information. The functional programming style of Common Lisp and bottom-up programming techniques provide a natural base for encapsulating domain-specific relationships. Direct use of the domain concepts by developers and applications builds confidence in the quality and accuracy of the final products.
Rusty Johnson and Peter Lindahl were educated in Computer Science at Kansas State University. They have been employed as Lisp developers by Northrop Grumman for well over a decade and have worked on various government projects, including the White House Publications Server.
William Anderson, deceased, was educated in Computer Science at Brigham Young and Kansas State Universities. He shared many employers with the other two authors. Bill is credited in this paper for his involvement in the conceptualization and design of Rapid Data Prototyping techniques. His presence has been missed.
i-dialogue: Modelling Agent Conversation by Streams and Lazy Evaluation
Clement Jonquet, University of Montpellier II & CNRS (Montpellier, France)
slides audio
i-dialogue defines and exemplifies a new computational abstraction which aims to model communicative situations such as those where an agent conducts multiple concurrent conversations with other agents. The i-dialogue abstraction is inspired both by the dialogue abstraction proposed by O'Donnell in 1985 and by the agent representation and communication model STROBE (for STReam, OBject, Environment) proposed by Cerri in 1996. i-dialogue models conversations among processes by means of fundamental constructs of applicative/functional languages (i.e., streams, lazy evaluation, and higher-order functions). The i-dialogue abstraction is adequate for representing multi-agent concurrent asynchronous communication such as can occur in service-providing scenarios on today's Web or Grid.
Clement Jonquet and Pr. Stefano A. Cerri are, respectively, a PhD student and a professor in computer science at University Montpellier II, in the Laboratory of Informatics, Robotics, and Microelectronics of Montpellier (LIRMM), France. They are members of the Social Informatics/Kayou team, interested in agents, Web and Grid services, CSP, machine learning, logic, etc. They proposed the STROBE model as a representation and communication agent model highly inspired by applicative/functional language constructs such as the environment model of evaluation, streams, objects as procedures, first-class constructs, continuations, Read-Eval-Print loops, etc. They are also members of the ELeGI project, which aims to promote a paradigm shift in e-learning toward a collaborative and highly interactive construction of knowledge, based both on Grid technologies and the enactment of the concept of service.
Common Lisp USB Communications Library
Drew Jacobs, Villanova University (Villanova, PA)
Brian Jorgage, Villanova University (Villanova, PA)
Frank Klassner, Villanova University (Villanova, PA)
audio
We describe the Common Lisp USB Communications Library. It provides programmers with a set of methods for interacting with USB devices via byte streams connected to device endpoints. We present this library as a first step toward the development of a USB standard for Common Lisp. The library has been developed for use in Common Lisp environments on Windows and Mac OS X.
Drew Jacobs is a MS computer science student at Villanova University. He received his BS in computer science from Villanova. His research interests include planners for intelligent gaming agents.
Brian Jorgage earned his BS in Electrical Engineering from Drexel University. He is currently a MS computer science student at Villanova University. His research interests lie in the area of USB communication.
Frank Klassner earned a BS in computer science and a BS in electronics engineering from the University of Scranton. He earned his MS and PhD in computer science from the University of Massachusetts at Amherst. He is an associate professor in Villanova University's Department of Computing Sciences. In addition to Lisp promotion and development his interests include AI, robotics, adaptive signal processing, and computer science.
The Memory Organization Package (MOP) for Web Agents
Seiji Koide, Galaxy Express Corporation (Tokyo, Japan)
audio
A Web Service Agent that automatically discovers, composes, and invokes Web Services actions must be aware of, and adaptive to, the open and unstable environment of the world-wide Web. In this paper, we address the architecture and components of such a Web Services Agent, including a planner, an executer, the memory, and an interface. First, we will briefly describe SWCLOS, which is a Semantic Web Processor on top of CLOS. We will focus on the semantic gap between RDF/OWL and CLOS, and on the realization of Case-Based memory using the Memory Organization Package (MOP, by Schank et al.) also built on top of SWCLOS. The agent memory maintains reflective data from the outside world, instantiates abstract plans generated by the planner, and hands off executable procedures to the executer.
Seiji Koide, General Manager of Galaxy Express Corporation in Japan, has developed SWCLOS, the Semantic Web Processor on top of CLOS, whose details are presented at a tutorial in this conference. Currently he is directing the Japanese National Project titled "Building Support Systems for Large-Scale Systems Using Information Technology", the goal of which is to develop a decision support system, using Semantic Web Services, for rocket launch operations.
Three Application Stories using Lisp
Hisao Kuroda, Mathematical Systems Inc. (Tokyo, Japan)
In this paper, I would like to introduce three Lisp projects I developed during 2004: (1) a car-crash testing database system for Honda R&D, (2) a radiation monitoring system for SPring-8, and (3) an intelligent agent system for Nippon Telegraph and Telephone Corporation. All three applications have generated profits for their respective companies and, since installation, have operated non-stop (24/7) without a single instance of downtime.
Each application includes increasingly popular programming features such as extensive numerical computations, video data processing (playing), DBMS interfaces, client-server operations, HTTP interface with session controlling, remote procedure calls, distributed objects in a network, knowledge representations and so on.
The paper describes this in more detail. In the process, I hope to convey how incredibly useful Lisp can be for general purpose programming tasks.
KURODA Hisao works for Mathematical Systems Inc. in Tokyo, and is manager of the Knowledge Engineering Division. His interests are in programming languages, intelligent computer systems, and Lisp, which he first met almost 20 years ago as Franz Lisp on 4.1BSD. For the past six years, Kuroda (as he is known to his English-speaking colleagues) has been using Common Lisp to develop commercial applications. His successful projects include: a C compiler for NTT's parallel computer, a design check system for the piping in a nuclear plant, and a legal expert system using Prolog. Recently, Kuroda was appointed to the Board of Directors of the ALU, the first such member from an Asian country.
Extensions to LISP to Support the Design of Electronic Circuits and Systems
Martin Mallinson
slides audio code
LISP as an electronic system development tool is described in detail. The system includes support for new number types, closures to model hardware elements, and functions for time- and frequency-domain analysis. It includes a scientific data plotter and a schematic entry system built on a generic presentation substrate. Dynamic dialog box generation builds native Windows interface elements at run time. "Point and click" probing of data from a commercial SPICE program uses simple inter-process communication. The tool also automates MS Word to produce background documentation as the design is developed. It has been used for designs selling in high volume and has generated more than 20 patented electronic systems.
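The "closures to model hardware elements" idea can be illustrated with a minimal sketch, assuming nothing about Mallinson's actual code: here a one-pole low-pass filter (the discrete analogue of an RC stage) is modeled as a closure whose internal state persists between samples.

```lisp
;; Illustrative sketch only: a hardware element modeled as a closure.
;; MAKE-LOWPASS returns a function of one sample; the filter state Y
;; lives in the closure, just as charge lives on a capacitor.
(defun make-lowpass (alpha)
  "Return a closure computing y += alpha * (x - y) per input sample."
  (let ((y 0.0d0))
    (lambda (x)
      (incf y (* alpha (- x y)))
      y)))

;; Elements compose like circuit stages:
;; (let ((stage (make-lowpass 0.1d0)))
;;   (loop repeat 100 collect (funcall stage 1.0d0)))
```

Each call to MAKE-LOWPASS instantiates an independent element, so a netlist becomes a collection of closures wired together by function composition.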
Martin Mallinson is the Director of IC Design engineering in the ESS Technology design center in Kelowna, BC, Canada. Since being introduced to Symbolics machines in 1985, Martin has developed most of his innovative electronic designs as LISP programs before implementing them in analog and digital electronic systems. Such LISP-developed systems include the engine controllers on Airbus and Boeing aircraft and the ESS HyperStream modulators used in DVD Class D amplifiers. When not designing new electronic devices, Martin dedicates himself to re-creating his Symbolics Genera development environment on the PC in Allegro LISP and making it available to a wider community.
Syntax Analysis in the Climacs Text Editor
Brian Mastenbrook, Motorola Global Software Group (Schaumburg, IL)
audio
The Climacs text editor is a CLIM implementation of a text editor in the Emacs tradition. Climacs was designed to allow for incremental parsing of the buffer contents, so that a sophisticated analysis of the buffer contents can be performed without impacting performance. We describe two different syntax modules: a module for a sequentially-defined syntax of a typical programming language, doing the bulk of its parsing in a per-window function; and an interactive editor for a textual representation of lute tablature, recursively using a per-buffer function for its parsing.
Brian Mastenbrook is a software engineer at Motorola's Global Software Group, where he works on automatic code generation from modeling languages. In his copious free time he participates in a number of open source Common Lisp-related projects, especially Steel Bank Common Lisp. He holds a Bachelor of Science in Computer Science and Mathematics from Roosevelt University.
Sheafhom
Mark McConnell, WANDL Inc. (Warren, NJ)
slides
Sheafhom 2.1 is a Common Lisp package for large-scale mathematical computations. Its front end is a language for problems in algebraic topology and number theory. These problems come down to large sparse systems of linear equations over the integers, or over other number systems where arithmetic gives exact results. Sheafhom's back end solves the sparse systems. We survey and compare algorithms for integer sparse matrices, and present implementation techniques in Lisp for sparse matrices over different number systems.
Mark McConnell received his B.A. from Harvard and his Ph.D. from Brown, both in mathematics. After a postdoctoral job at Harvard, he joined the math department of Oklahoma State University, receiving tenure in 1995. In 1999 he and his family moved to New Jersey. He took up his current position at WANDL, Inc., where he specializes in algorithms for graph layout, discrete optimization and numerical problems. He and his wife have two children aged 17 and 6. He is active in the church as a choir member, composer and layreader.
A Framework for Maintaining the Coherence of a Running Lisp
Drew McDermott, Yale University (New Haven, CT)
audio slides
During Lisp software development, it is normal to revise and reload programs and data structures continually. The result is that the state of the Lisp process can become "incoherent," with updates to "supporting chunks" coming after updates to the chunks they support. The word chunk is used here to mean any entity, content, or entity association, or anything else modelable as up to date or out of date. To maintain coherence requires explicit management of an acyclic network of chunks, which can depend on conjunctions and disjunctions of other chunks; further, the updating of a chunk can require additional chunks. In spite of these complexities, the system presented in this paper is guaranteed to keep the chunk network up to date if each chunk's "deriver" is correct, the deriver being the code that brings that chunk up to date.
Drew McDermott is Professor of Computer Science at Yale University. He was educated at MIT, where he received a Ph.D. in 1976. His research is in planning, knowledge representation, and inter-agent communication, with side excursions into philosophy. He coauthored the book "Artificial Intelligence Programming" in the 1980s with Eugene Charniak, Jim Meehan, and Chris Riesbeck, the first book on advanced Lisp for AI applications. He currently uses Lisp for research into AI planning. He is on the editorial board of Artificial Intelligence, and is a Fellow of the American Association for Artificial Intelligence.
Advice about Debugger Construction
Arthur Nunes-Harwitt
slides audio
A debugger is a tool that allows the programmer to view some aspect of the running program. This paper will present a specification of the debugger commands step and next for the call-by-value lambda-calculus with constants. The operationally-defined CEK-machine will be extended to implement these commands in a way that is proven to be faithful to the specification. This theory will then be used to develop actual debuggers written in Scheme.
Arthur Nunes-Harwitt received his bachelor's degree in Computer Science from Brandeis University. He went on to receive master's degrees in both Mathematics and Computer Science from the University of Pittsburgh, and is ABD in Computer Science at Northeastern University. He has worked as a software engineer at the Learning, Research and Development Center in Pittsburgh and at The Mathworks, and has taught at the Wentworth Institute of Technology. He is currently a faculty member at SUNY Nassau Community College in the Mathematics and Computer Science department. His interests include the design and implementation of LISP-like languages.
Generating .NET Applications Using Lisp
Alex Peake, Comac, Inc. (Milpitas, CA)
slides audio
Lisp macros are powerful creators of abstractions, up to and including domain-specific languages. Yet Lisp has poor support for business applications - GUI, database support, enterprise infrastructure, and libraries. The .NET platform has all the support needed to build business applications, such as libraries, components, and infrastructure, yet has no macros (or the other development support that Lisp offers). The ideal would be Lisp.NET, but since it does not exist we use Lisp to create a macro-like facility (code generation) for C#. The result is an order-of-magnitude improvement in productivity in the domain of business applications.
Alex Peake has been Chief Technologist at Comac, Inc., an Iron Mountain company, for the last five years. Prior to that he started several software companies focused on software solutions and services for marketing products and services, with clients including Disney, Adobe, Allstate, Blue Cross, and CompuServe. He has been exploring programming productivity for more than 15 years, and developing generative solutions for more than 10 years. He worked for Agilent (then Hewlett-Packard) for most of his prior career, focusing on test systems for the telecommunications industry. He holds an Honors Degree in Physics from Nottingham University.
Performance Beyond Expectations
Lynn Quam, (Winston, OR)
The performance of Common Lisp based Image Understanding Systems has been significantly improved by the careful choice of declarations, object representations, and method dispatch in a small number of low-level primitives. In matrix multiplication and image pixel access, the performance achieved is within a factor of two of optimized C code. Effective Lisp compiler register allocation, fast CLOS slot access, and fast generic function dispatch are critical. For large grain operations, performance can be further increased using foreign function libraries. The paper closes with a "laundry list" of features that a Common Lisp implementation should provide for improving performance.
Lynn Quam received a BS in Mathematics from Oregon State University in 1966 and a PhD in Computer Science from Stanford University in 1971. From 1966 through 1975 he worked for the Stanford Artificial Intelligence Laboratory, where he contributed to the development of Lisp 1.6 and of image processing tools for the analysis of data from NASA's Mariner 9 and Viking missions. From 1977 through 1999, he was a senior research scientist for the Perception Group of SRI's Artificial Intelligence Center, during which time he developed Lisp based tools and systems for image understanding, including the SRI Cartographic Modeling Environment (CME) and the RADIUS Common Development Environment (RCDE). Lynn retired from SRI in 1999, but continues to work on an open source successor to CME called FREEDIUS.
Integrating Problem-Solving Models in Common Lisp
Nancy Reed, University of Hawaii (Honolulu, HI)
James Moen, Augsburg College (Minneapolis, MN)
Nancy E. Reed received her Ph.D. in Computer Science from the University of Minnesota in 1995. Her research interests are primarily in the areas of autonomous agents, knowledge-based systems, and bio/medical informatics. Dr. Reed is an assistant professor in the Information and Computer Sciences Department at the University of Hawaii, Manoa. She previously taught and did research at Sonoma State University, the University of California, Davis, and Linköping University in Sweden.
James B. Moen received his Ph.D. in Computer Science from the University of Minnesota in 1994. His research interests include the application of formal logic to artificial intelligence, and the design and implementation of programming languages. Dr. Moen is an assistant professor of Computer Science at Augsburg College in Minneapolis, Minnesota.
The (Re)Birth of the Knowledge Operating System
JP Massar, BioLingua (Palo Alto, CA)
Jeff Shrager, Carnegie Institute of Washington, Department of Plant Biology (Stanford University, Stanford, CA)
Michael Travers, Hyperphor (Pacifica, CA)
slides audio
We introduce the concept of a Knowledge Operating System (KnowOS), and describe a working example instantiated in two real implementations: a biologist's knowledge workbench, and a more general knowledge analyst's workbench. The services offered by classical operating systems include persistence of data objects and processes, application integration, multi-user management, and both programming and user interfaces. Whereas classical operating systems provide these services for simple data objects such as files or tables, the KnowOS provides them for networks of complex objects. Add to this an efficient, interactive, high-level language and one has a powerful environment for knowledge programming.
Michael Travers holds a BS in mathematics and a Ph.D from the Media Lab at MIT, where he conducted research in artificial life, programming languages and environments, and agent-based systems. His publications include work on knowledge visualization, computer-supported cooperative work, and programming languages. At IBM's TJ Watson Laboratory he conducted research on Java tools, including the Skij Scheme implementation, and the use of rule-based systems for modeling business processes. For the past five years he has been designing systems for computational chemistry at Afferent Systems, where he was Director of Human-Computer Interaction, and Elsevier MDL Inc, where he is Principal Software Engineer. He is also the Knowledge Representation and User Interface Lead for the BioLingua project.
Jeff Shrager is a research fellow at The Carnegie Institute of Washington, Department of Plant Biology, where he studies Cyanobacteria (the most important organisms on earth); a consulting Associate Professor in Symbolic Systems at Stanford, where he teaches interaction analysis and symbolic biocomputing (separately); a research scientist at The Institute for the Study of Learning and Expertise, where he works on scientific discovery by humans and computers (jointly!); and a member of the research staff at PARC, where he works on collaborative knowledge-based decision-making. He probably has several other appointments that he can't remember where he does something else altogether.
Combinatorial Hypercoding with Macro-Defining Macros
Robert Vogt, Vogt & Partners (Ann Arbor, MI)
slides
This paper outlines a process that starts from a single image processing operation, expressed initially in a very compact but general way. That operation is then expressed, in a more elaborate but optimized form, as a first-level macro, which allows one to write similarly structured, optimized functions that differ only in the data-element operation being executed. Following that, a more complex, second-level or "macro-writing macro" is described, which allows one to create hundreds of macros sharing the same basic framework but differing in the specific data-element types, scanning directions, argument structures, and optimization levels being supported. These automatically generated macros can then in turn be used to generate combinatorial libraries of methods that are optimized for specialized operations on specific image data types, again in an entirely automated fashion.
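The paper's actual macros are not reproduced here, but the two-level pattern it describes can be sketched in a few lines of Common Lisp. All names below (define-pixel-op, define-pixel-op-family, the generated max-u8 and friends) are hypothetical illustrations, not the paper's API:

```lisp
;; First level: a macro that expands into a type-declared, optimized loop,
;; parameterized by the element-wise operation and element type.
(defmacro define-pixel-op (name op element-type)
  "Define NAME as a function applying OP element-wise to vectors A and B,
storing into RESULT, with declarations that permit an optimized loop."
  `(defun ,name (a b result)
     (declare (type (simple-array ,element-type (*)) a b result)
              (optimize (speed 3)))
     (dotimes (i (length a) result)
       (setf (aref result i) (,op (aref a i) (aref b i))))))

;; Second level: a "macro-writing macro" that stamps out a combinatorial
;; family of such definitions in one call.
(defmacro define-pixel-op-family (type-tag element-type &rest ops)
  `(progn
     ,@(loop for op in ops
             collect `(define-pixel-op
                          ,(intern (format nil "~A-~A" op type-tag))
                          ,op ,element-type))))

;; One call generates MAX-U8, MIN-U8, LOGAND-U8, and LOGXOR-U8.
(define-pixel-op-family u8 (unsigned-byte 8) max min logand logxor)
```

The combinatorial payoff is that adding one type tag or one operation to such a call multiplies the set of specialized, individually optimized functions without any hand-written duplication.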
Dr. Vogt's primary interest is in intelligent vision systems. He received his Ph.D. in Computer Science at the University of Michigan, where his thesis, "Automatic Generation of Morphological Set-Recognition Algorithms," was published by Springer in 1989. In it, he used Lisp to automatically discover and write simple image processing algorithms based on a special-purpose high-speed machine known as the ERIM Cytocomputer. As part of that thesis research, he spent two years working at Thomson-CGR in Paris, and at the French Ecole des Mines de Fontainebleau (birthplace of "mathematical morphology," a set-theoretic framework for shape-based image analysis). Since then, he has concentrated on developing algorithms for commercial and government applications, using a variety of imaging modalities. These cover a wide range of problem domains, including medical, industrial, defense, intelligence, remote sensing, and document exploitation.
A New GOFAI Theory: How Language Works
Wai Yeap, Auckland University of Technology (Auckland, New Zealand)
slides audio
Many early AI "theories" about how the mind works were implemented in LISP, and such work brought much excitement to the field of AI research in the '70s and '80s. LISP and AI were inseparable then. Somehow such work rarely appears in AI conferences anymore, and LISP went out of fashion. Given this rebirth of a conference on LISP, this paper will discuss a new GOFAI theory of language implemented in LISP. Like in the good old days, this paper is part AI, part LISP.
Professor Yeap is the Director of the Institute for Information Technology Research at the Auckland University of Technology, New Zealand. His first love in AI is understanding how large-scale space is perceived. He developed a computational theory of cognitive mapping and implemented it in LISP (see AI, v. 34, 1988; AI, v. 107, 1999). His second, begun 10 years ago, is understanding how language works. He is currently developing a computational theory of language, also, of course, implemented in LISP. His group keeps LISP alive in New Zealand, being the only group there actively using it.
A Collaborative Framework for Managing Uncertainty and Cognitive Bias
Eric Yeh, SRI International (Menlo Park, CA)
Cognitive bias is a common problem encountered in analysis and decision-making. Humans, often unknowingly, have a tendency to skew assessments and decisions based on background, experiences, and prejudices. Angler is a collaborative framework that employs divergent and convergent problem-solving strategies to help surface and overcome these cognitive biases. Tools such as brainstorming, clustering, and voting are used to help a set of diverse professionals complete a knowledge task. Angler is a web-based application, using a combination of AllegroServe and Active Lisp Pages, to create an asynchronous collaborative environment that can be deployed with a minimum of software installation.
Eric Yeh is a software engineer at the Artificial Intelligence Center at SRI, where he is a member of the Reasoning and Representation Group. His interests include issues in evidential reasoning, knowledge discovery and reasoning over semi-structured and unstructured data, and reasoning and action under uncertainty. Eric has a B.A. in Computer Science from the University of California at Berkeley.
Rule-Based Automatic Simplification of Trigonometric Expressions
Hongguang Fu, Chengdu Institute of Computer Applications, Chinese Academy of Sciences (Chengdu, China)
Xiuqin Zhong, Chengdu Institute of Computer Applications, Chinese Academy of Sciences (Chengdu, China)
Zhenbing Zeng, East China Normal University (Shanghai, China)
Dr. FU Hongguang is a professor at the Chengdu Institute of Computer Applications, Chinese Academy of Sciences. His main research areas focus on solving polynomial equations by the Dixon resultant method and geometry invariants, developing a new computer algebra system, and investigating the applications of such systems to robotics, to computational molecular biology and to mathematics education. Since 2000, he has led a Common Lisp user group in China and developed a series of intelligent software packages in Allegro Common Lisp for use in mathematics education.
ZHONG Xiuqin is a full-time doctoral student in applied mathematics at Chengdu Institute of Computer Applications, Chinese Academy of Sciences. She has worked with Common Lisp since 2000. She is interested in computer algebra systems and machine proofs of geometry theorems.
Dr. ZENG Zhenbing is a professor at East China Normal University in Shanghai. His main research area focuses on distance geometry and machine proofs of inequalities.
© alu 2005