General-purpose computers have embraced multi-core and multi-CPU architectures and are heading toward many-core ones. Though well adapted to classic processing tasks, those architectures do not address the requirements of ultra- and extreme-low-latency applications, for which there is a tremendous need for different computing paradigms. One of them is reconfigurable hardware computing, which uses field-programmable gate arrays (FPGAs) to accelerate specific kinds of computations. But even though the performance gain can be several orders of magnitude over a classic CPU, the fact that FPGAs are notoriously difficult to program has been one of the major factors preventing the mainstream adoption of this technology. There have been several attempts to solve this problem, mostly by developing compilers that generate hardware descriptions in VHDL/Verilog from general-purpose languages like C. Though this can be useful at times, our opinion is that it is not the most efficient approach. In this talk we will present our solution, based on domain-specific-language compilers and tools written in Common Lisp, and we will show how they enable us to very effectively and easily generate high-performance Domain Specific Hardware Cores optimized for the processing requirements of extreme-low-latency applications.
Marc Battyani is the CTO and founder of NovaSparks, a global provider of end-to-end extreme low-latency computing solutions.
A specialist in hard real-time high-performance computing and networking with FPGAs, as well as algorithmic code generation and optimization, Marc Battyani brings 25 years of experience in the design of electronic systems and their associated domain-specific compilers and cores, leveraging the computing power of FPGAs and enabling their use outside of their traditional markets.
Prior to founding NovaSparks, Marc was the founder of Fractal Concept, a contract research company developing products such as CT scanners, industrial ultrasound echographic systems, and various smart sensors for the industrial, medical, and military markets. The common thread among those projects is the use of compilers and other high-level tools written in Common Lisp to enable specific hardware to perform tasks in unique and optimal ways.
Marc Battyani holds an MSc in electronics from Supelec (France) and another in computer science from Paris XI (France) / UDM (Canada). He is also a well-known Common Lisp expert and has released several open-source projects for the language.
This talk will mainly present the Hop language and recent developments around it.
Most of his research career was devoted to the study of applicative languages (Lisp then Scheme), their semantics and implementation. He is the author of the well-known "Lisp in Small Pieces" book precisely on this topic.
As everyone who loves Lisp understands, Lisp is a language of freedom: syntax-free, memory-management-free, free extension, free modification, free ways of thinking, and so on.
He will try to confirm this marvelous freedom, with a number of illustrations based on his Lispful history, including topics on language design, implementation, Lisp applications, and even puzzles.
Ikuo Takeuchi received BS and MS degrees in mathematics in 1969 and 1971, respectively, and a PhD in engineering in 1996, all from the University of Tokyo.
From 1971, he worked for NTT Laboratories, mainly in the Basic Research Division, where he was engaged in research on computer programming languages, especially Lisp languages and their implementation. In 1986, the Lisp machine system TAO/ELIS, whose language design and implementation were led by him, came on the market. He then developed another Lisp machine system, TAO/SILENT.
In 1997, he became a professor of the University of Electro-Communications, moved to the University of Tokyo in 2005, and moved to Waseda University in 2011. He is an Emeritus Professor of the University of Tokyo.
He has also been working for IPA (Information Processing Promotion Agency) as a project manager of the Exploratory IT Human Resources (Mitoh) Project for more than ten years since its origination.
In this talk, the speaker's personal view of Lisp will be given to explain why he writes programs mostly in Lisp. Then, typical Lisp systems and machines developed in Japan will be introduced. Finally, an animation of visible garbage collection will be shown.
Eiiti Wada graduated from the Department of Physics of Tokyo University in 1955 and joined Professor Hidetosi Takahasi's laboratory, where he prepared a number of system programs for the Parametron Computer PC-1 from 1958 to 1964.
From 1964 to 1977, Associate Professor of the Department of Mathematical Engineering of Tokyo University.
From 1973 to 1974, Project Mac of MIT.
From 1977 to 1992, Professor of the Department of Mathematical Engineering of Tokyo University.
From 1992 to 2002, Executive Advisor of Fujitsu Laboratories.
From 2011 to present, Research Laboratory of Internet Initiative Japan.
This paper provides information about an open-source application server written in Common Lisp. The first chapter gives brief information about the underlying technology and specifications. The subsequent chapters summarize the subsystems that form the server. Moreover, a chapter is dedicated to the first application built using the server, namely Coretal. Finally, the last chapter summarizes common and differing aspects relative to the current state of technology.
Criminal data comes in a variety of formats, mandated by state, federal, and international standards. Specifying the data in a unified fashion is necessary for any system that intends to integrate with state, federal, and international law enforcement agencies. However, the contents, format, and structure of the data is highly inconsistent across jurisdictions, and each datum requires different ways of being printed, transmitted, and displayed. The goal was to design a system that is unified in its approach to specify data, and is amenable to future ``unknown unknowns''. We have developed a domain-specific language in Common Lisp which allows the specification of complex data with evolving formats and structure, and is inter-operable with the Common Lisp language. The resultant system has enabled the easy handling of complex evolving information in the general criminal data environment and has made it possible to manage and extend the system in a high-paced market. The language has allowed the principal product to enjoy success with over 50 users throughout the United States.
This paper describes BABAR, a knowledge extraction and representation system, completely implemented in CLOS, that is primarily geared towards organizing and reasoning about knowledge extracted from the Wikipedia Website. The system combines natural language processing techniques, knowledge representation paradigms and machine learning algorithms. BABAR is currently an ongoing independent research project, that when sufficiently mature, may provide various commercial opportunities.
BABAR uses natural language processing to parse both page names and page contents. It automatically generates Wikipedia topic taxonomies, thus providing a model for organizing the approximately 4,000,000 existing Wikipedia pages. It uses similarity metrics to establish concept relevancy and clustering algorithms to group topics based on semantic relevancy. Novel algorithms are presented that combine approaches from the areas of machine learning and recommender systems. The system also generates a knowledge hypergraph which will ultimately be used in conjunction with an automated reasoner to answer questions about particular topics. This paper describes the CLOS implementation of the various subcomponents of BABAR. These include a recursive descent parser, a hypergraph component, a number of new clustering and classification approaches, and a Horn clause theorem prover. Finally, this paper suggests how such a system can be used to implement a new generation of browsers called knowledge browsers.
This paper focuses on a special relationship in Common Lisp between cl:standard-object and cl:standard-class, which allows us to perform metaclass programming. Whereas the self-referential character of the membership loop in the class-instance relationship might strike one as a vicious circle, the CLOS MOP does not in fact involve the Russell paradox that appeared in naive set theory. The history of set theory is a history of paradoxes contained in sets. The Russell paradox was resolved in two ways: one is the axiom of separation by Zermelo, and the other is the ramified type theory of Whitehead and Russell. We can capture metaclasses in CLOS as higher-order types in ramified type theory. However, up to now, there has been no discussion of paradoxical metaclassing from the viewpoint of set theory in the Lisp community. This paper is an attempt to clarify, from the viewpoint of set theory, how the infinite reflective tower is enabled in the CLOS MOP without involving paradoxes.
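The membership loop the paper analyzes can be observed directly at any Common Lisp REPL; this short session (standard CLOS, no extra libraries) illustrates it:

```lisp
;; The self-referential class/instance loop at the heart of the CLOS MOP:
(class-of (find-class 'standard-object)) ; => #<STANDARD-CLASS STANDARD-CLASS>
(class-of (find-class 'standard-class))  ; => #<STANDARD-CLASS STANDARD-CLASS>
;; standard-class is thus an instance of itself -- the "membership loop"
;; that looks circular from a naive set-theoretic point of view.
```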
With the increasing processor-memory performance gap, improving cache locality is as important as improving virtual memory locality. In many applications, especially in search algorithms on large pointer-based data structures, breadth-first copying algorithms increase cache misses, TLB misses, and page faults. To improve locality at both the cache and virtual memory levels of the memory hierarchy, ``hierarchical clustering,'' which groups data objects at multiple hierarchical levels, was proposed. In this study, we implemented hierarchical clustering in a commercial Common Lisp system; we considered various implementation issues, particularly in generational GC. Our garbage collector automatically improves data locality at multiple levels; it also allows us to employ a simple tuning method for further improvement. Evaluations with two microbenchmark programs, an XML application, and a tree-based ray-tracing application show that hierarchical clustering provides good overall performance; more precisely, it serves as insurance that covers miss-induced performance degradation.
Clack is a web application environment for Common Lisp that makes web applications portable and reusable by abstracting HTTP into a simple API.
In this paper, I describe the problems in web development and how Clack solves them.
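The abstraction Clack provides can be sketched in a few lines: an application is just a function from a request environment to a (status headers body) list, independent of the underlying HTTP server. API details (as of Clack circa 2012) should be treated as a sketch, not a definitive reference:

```lisp
(ql:quickload :clack)

;; An application is a plain function of the request environment.
(defvar *app*
  (lambda (env)
    (declare (ignore env))
    '(200 (:content-type "text/plain") ("Hello from Clack"))))

;; The same application runs unchanged on any supported backend,
;; selected by the :server keyword:
(clack:clackup *app* :server :hunchentoot :port 5000)
```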
LIL, the Lisp Interface Library, is a data structure library based on Interface-Passing Style. This programming style was designed to allow for parametric polymorphism (abstracting over types, classes, functions, data) as well as ad hoc polymorphism (incremental development with inheritance and mixins). It consists in isolating algorithmic information into first-class interfaces, explicitly passed around as arguments dispatched upon by generic functions. As compared to traditional objects, these interfaces typically lack identity and state, while they manipulate data structures without intrinsic behavior. This style makes it just as easy to use pure functional persistent data structures without identity or state as to use stateful imperative ephemeral data structures. Judicious Lisp macros allow developers to avoid boilerplate and to abstract away interface objects to expose classic-looking Lisp APIs. Using only a very simple linear type system to model the side-effects of methods, it is even possible to transform pure interfaces into stateful interfaces or the other way around, or to transform a stateful interface into a traditional object-oriented API.
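Interface-passing style can be illustrated with a hypothetical miniature (illustrative names, not LIL's actual API): the interface object carries the algorithm and is dispatched on by generic functions, while the data stays a plain, behavior-free structure.

```lisp
(defclass <alist> () ())    ; interface: maps represented as association lists
(defclass <hash> () ())     ; interface: maps represented as hash tables

(defgeneric lookup (interface map key)
  (:documentation "Find KEY in MAP, dispatching on the INTERFACE object."))

(defmethod lookup ((i <alist>) map key)
  (cdr (assoc key map)))

(defmethod lookup ((i <hash>) map key)
  (gethash key map))

;; The same call works on either representation; the interface object
;; has no identity or state of its own:
(lookup (make-instance '<alist>) '((a . 1) (b . 2)) 'a) ; => 1
```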
This paper describes the author name disambiguation system developed by MSI using Common Lisp. The name disambiguation system is a record linkage system for authors of academic and research papers. The purpose of the system is to display papers written by the same person on a Web site. However, there are a large number of persons with the same name. Therefore, it is necessary to look not only at a name but also at affiliation, coauthors, journal title, paper title, etc., and to judge synthetically. Since there are so many data records, they cannot be checked one by one to determine whether they refer to the same person. So, in the system, a similarity search method is first applied. Then a machine learning algorithm, SVM (Support Vector Machine), is used for discrimination analysis. MSI has developed CLML (Common Lisp Machine Learning), a machine learning package written in Common Lisp, and the SVM algorithm used by the system is bundled in CLML. The number of author records currently handled is about 40 million, so parallelization of the processing is indispensable. Parallelization is performed at two levels: the work is distributed across multiple machines, and multi-core processing (fork-future) is carried out on each machine.
We present the object-centered functional programming language Ralph, which is related to Lisp. Ralph supports an extended subset of Dylan’s features (the intermediate Dylan standard with a prefix syntax) and compiles to JavaScript. In this paper we focus on the compilation approach used by the Ralph compiler. The Ralph compiler produces more efficient JavaScript code than similar compilers translating Lisp-like languages, due to some trade-offs and the use of an intermediate representation based on Administrative Normal Form. Ralph's mechanics and its interactions with other compilation passes and aspects of the language are discussed.
For the effective use of resources in large-scale parallel computing environments such as supercomputers, we often use job-level parallelization, that is, many sequential/parallel runs of a single program with different parameters. To describe such parallel processing easily, we developed a scripting system named Xcrypt, based on Perl. Using Xcrypt, even computational scientists who are not familiar with scripting languages can perform typical job-level parallel computations such as parameter sweeps by using a simple declarative description. In this paper we propose a Common Lisp version of Xcrypt. It enables us to write job-level parallel executions with various powerful features of Common Lisp, including higher-order functions, macros, and the REPL. In addition, to realize this system we implemented RPC between Lisp and Perl that supports callback functions and references to remote objects. This paper presents the features, implementation, and practical examples of the Lisp-based Xcrypt.
A Data Flow Visual Programming Language (DFVPL) is a programming language based on the data flow computing model. The syntactic structure of the model is very simple: it describes any computation as a directed graph consisting of nodes as computing units and links as their data flow connections. Thus, DFVPLs should provide features to display and edit directed graphs visually, which is why these languages are categorized as visual programming languages.
In a DFVPL, the concurrency contained in a given program is naturally and automatically exposed just by writing the program as a data flow graph. This is one of the most important properties of DFVPLs. On the other hand, they have a serious flaw: translating large-scale, complex real-world programs into data flow graphs is sometimes very hard or even impossible.
Plumber is a DFVPL written in Lisp to investigate what is needed to make DFVPLs real programming languages. A node (computing unit) of Plumber can be any function, as in a usual DFVPL. In addition, we introduced a special notation for making curried versions of functions. This feature introduces higher-order functions into DFVPLs naturally.
Our notation for higher-order functions is very simple yet powerful. To demonstrate its usability and naturalness, we show how the notation can help find the notion of monads, which has now become a standard programming technique for structuring imperative features in a purely functional way.
A class-based object-oriented programming framework based on CLOS, named CLOS/Class-Based (CLOS/CB), is proposed. It is a framework for object-oriented programming in CLOS, but in the class-based style of Java or C#. Programming in CLOS/CB is class-based in the sense that all methods belong to some class. Another restriction in CLOS/CB is that only single inheritance is allowed, which substantially restricts the usual CLOS programming style based on multiple inheritance. Although several popular languages and systems, such as C++ and CLOS, support multiple inheritance, it has been recognized that multiple inheritance can be problematic in certain circumstances. Although CLOS/CB is similar to Java or C# in its inheritance mechanism and class-based style, CLOS/CB uses the method selection mechanism of CLOS as is. We present an implementation of this framework on top of CLOS, and several applications of the framework are demonstrated.
We implemented the Scheme compiler Scm2Cpp. Scm2Cpp translates Scheme code into human-readable and portable C++ code. The translated C++ code has speed comparable to the C code that Stalin generates, and is much faster than Gambit-C. Applying OpenMP to the generated C++ code outperforms Stalin. We also show that Scm2Cpp is faster than other well-known Scheme compilers on Mac OS X.
The efficiency of generating random numbers from probability distributions (e.g., white noise, Beta variates) is important for applications using the Markov chain Monte Carlo (MCMC) method. But most existing implementations of probability random number generators use classic, naive, and slow methods. Fast and convenient probability random number generator libraries are therefore required.
One big difficulty is that the fastest generation method depends on the arguments, especially the so-called "shape parameters". Because of this, the library has to switch methods according to the shape parameters, but sometimes this overhead becomes severe. On the other hand, users of the library want to use a generator directly, without knowledge of shape parameters and the various methods. So there is a trade-off between design and optimization.
In this paper, I relax this trade-off by using Common Lisp compiler macros and implement a library of probability random number generators. I compare my implementation with several major implementations (in C, R, Python, etc.) and show that mine is faster than the others, especially in cases where the overhead of parameter switching is large.
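The compiler-macro technique can be sketched as follows (all names here are illustrative placeholders, not the paper's actual API): when the shape parameter is a compile-time constant, the generation method is selected at compile time and the runtime dispatch disappears; otherwise the ordinary function with its runtime switch is used.

```lisp
;; Runtime entry point: dispatch on the shape parameter at every call.
;; GAMMA-SMALL-SHAPE / GAMMA-LARGE-SHAPE stand in for two generation
;; methods, each fastest in its own parameter range.
(defun gamma-random (shape)
  (if (< shape 1.0)
      (gamma-small-shape shape)
      (gamma-large-shape shape)))

;; Compiler macro: if SHAPE is a constant form, pick the method now and
;; emit a direct call; if not, return FORM unchanged to keep the
;; runtime dispatch.
(define-compiler-macro gamma-random (&whole form shape)
  (if (constantp shape)
      (if (< (eval shape) 1.0)
          `(gamma-small-shape ,shape)
          `(gamma-large-shape ,shape))
      form))
```

The design point is that callers write plain `(gamma-random 0.5)` with no knowledge of the underlying methods, while the compiler removes the switching overhead whenever the parameter is statically known.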
Common Lisp programmers build large systems and compose libraries thanks to ASDF. We will describe recent improvements to ASDF and how they enable programming idioms that were not possible before: package-renaming, list-of, extreme makeover syntax edition. We will also quickly talk about the status of XCVB.
Customer Support or Relation Management (CRM) used to be a post-sale afterthought, an obligation stemming from product sales and a cost of doing business. No longer! In today’s abundance of product and service choices, CRM must be an integral part of business to maintain brand loyalty, which is becoming critical for any business to survive. What if you could anticipate what you can do for your customers before they know it, predict what your customers may like or dislike and take steps to address potential problems before they switch vendors, or target individual marketing campaigns to very specific and appreciative groups of customers instead of spamming them? How would this improve a business's bottom line and change its operation? This paper discusses an Intelligent Decision Automation platform for such a CRM system of tomorrow. It was built by Amdocs and Franz using a semantic technology database, AllegroGraph, developed in Lisp, together with machine learning and scalable Java middleware. The semantic platform consists of a number of elements: an Event Collector, a Decision Engine, the AllegroGraph triple store, a Bayesian Belief Network, and a Rule Workbench. Combined, this pipeline of technologies implements an event-condition-action framework to drive business processes in real time.
The proliferation and accessibility of the Internet have made it simple to view, download, and publish source code. This paper gives a short tutorial on how to create a new Common Lisp project and publish it.
The OS facilities in the Common Lisp standard are outmoded, having been designed to resemble the native APIs of Genera and VMS; they are incomplete compared to today's popular operating systems and lack support for such basic IPC facilities as sockets and pipes and the execution of external programs.
IOLib adds:
The author will introduce IOLib, compare its features to those of the Common Lisp standard, and explain how the differences enable the development of safe and high-performance programs in Common Lisp.
Date: 2012-10-02 16:36:31 JST
Generated by Org version 7.8.11 with Emacs version 24
© alu 2012