XML Daily Newslink. Monday, 16 October 2006
From: Robin Cover <robin@oasis-open.org>
To: XML Daily Newslink <xml-dailynews@lists.xml.org>
Date: Mon, 16 Oct 2006 14:02:12 -0400 (EDT)
XML Daily Newslink. Monday, 16 October 2006
A Cover Pages Publication http://xml.coverpages.org/
Provided by OASIS http://www.oasis-open.org
Edited by Robin Cover
====================================================
This issue of XML.org Daily Newslink is sponsored
by BEA Systems, Inc. http://www.bea.com
====================================================
HEADLINES:
* DLF-Aquifer Asset Actions Experiment: The Value of Actionable URLs
* An Interoperable Fabric for Scholarly Value Chains
* Understanding the Service Lifecycle within a SOA: Design Time
* W3C Working Draft for Remote Events for XML (REX)
* Let the Browser Wars Begin
* JBoss Awakens Hibernate with Persistence API
* Integrate GridFTP and Grid Protocols into Firefox/Mozilla-based Tools
----------------------------------------------------------------------
DLF-Aquifer Asset Actions Experiment: The Value of Actionable URLs
Robert Chavez, Timothy W. Cole, et al. (eds), D-Lib Magazine
Metadata records harvested using the Open Archives Initiative Protocol
for Metadata Harvesting (OAI-PMH) are often characterized by scarce,
inconsistent and ambiguous resource URLs. There is a growing recognition
among OAI service providers that this can create access problems and
can limit the range of services offered. This article reports on an
experiment carried out by the Digital Library Federation (DLF) Aquifer
Technology/Architecture Working Group to demonstrate the utility of
harvestable metadata records that include multiple typed actionable
URLs ("asset actions"). Four data providers, one tool provider, and
one OAI service provider participated in the experiment -- Indiana
University, Northwestern University, the Chicago Historical Society,
Tufts University, the University of Virginia (UVa), and the University
of Illinois at Urbana-Champaign (UIUC). The genesis of the experiment,
a brief description of experiment objectives and XML schemas used, and
descriptions of data provider, tool, and service provider implementations
are outlined below. The experimental portal that was built remains
publicly accessible. One of the major goals of the Digital Library
Federation (DLF) Aquifer project is to enable "deep sharing" of digital
library content across institutional and technological boundaries.
Members recognized that this would require the development of
standardized low-barrier-to-entry interoperability mechanisms, allowing
digital content providers to expose the components and views of their
digital objects to a variety of tools that scholars might be using for
collecting, annotating, editing, and otherwise repurposing digital
content. Implementing asset actions for OAI-PMH required expressing
packages of actionable URLs in XML, which could be validated against
a schema written in W3C XML Schema Language. For the purposes of this
experiment, descriptive metadata and asset actions were harvested
together. To allow harvest of asset actions in combination with
descriptive metadata expressed in either simple DC or the Metadata
Object Description Schema (MODS), two additional schemas were required.
http://www.dlib.org/dlib/october06/cole/10cole.html
See also on the Aquifer project: http://www.dlib.org/dlib/may06/kott/05kott.html
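
As a rough sketch of the schema-based validation step described above
(the file names are placeholders, not the experiment's actual
artifacts), a harvested record carrying asset-action URLs could be
checked against a W3C XML Schema with the standard Java validation API:

  // Minimal sketch: validating an asset-actions XML document against a
  // W3C XML Schema with the javax.xml.validation API (Java 5+). The file
  // names below are placeholders, not the experiment's actual artifacts.
  import javax.xml.XMLConstants;
  import javax.xml.transform.stream.StreamSource;
  import javax.xml.validation.Schema;
  import javax.xml.validation.SchemaFactory;
  import javax.xml.validation.Validator;
  import java.io.File;

  public class AssetActionsValidator {
      public static void main(String[] args) throws Exception {
          SchemaFactory factory =
              SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
          // Hypothetical schema file standing in for the asset-actions schema.
          Schema schema = factory.newSchema(new File("asset-actions.xsd"));
          Validator validator = schema.newValidator();
          // Harvested record containing the package of typed, actionable URLs.
          validator.validate(new StreamSource(new File("harvested-record.xml")));
          System.out.println("Record is valid against the asset-actions schema.");
      }
  }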
----------------------------------------------------------------------
An Interoperable Fabric for Scholarly Value Chains
Herbert Van de Sompel, Carl Lagoze et al. (eds), D-Lib Magazine
This article describes an interoperability fabric among a wide variety
of heterogeneous repositories holding managed collections of scholarly
digital objects. These digital objects are considered units of
scholarly communication, and scholarly communication is seen as a
global, cross-repository workflow. The proposed interoperability fabric
includes a shared data model to represent digital objects, a common
format to serialize those objects into network-transportable surrogates,
three core repository interfaces that support surrogates (obtain,
harvest, put) and some shared infrastructure. This article also
describes an experiment implementing an overlay journal in which this
interoperability fabric was tested across four different repository
architectures (aDORe, arXiv, DSpace, Fedora). Our work exploits the
expanding number and variety of heterogeneous repositories that hold
managed collections of digital objects. We propose that the digital
objects from these repositories can function as the units of scholarly
communication in cross-repository workflows, and can also provide the
raw materials for the creation of a variety of cross-repository
services. In accordance with the rapidly emerging scholarly reality,
we consider these digital objects to be compound in nature. That is,
they are aggregations of datastreams with both a variety of media
types and a variety of intellectual content types including papers,
datasets, simulations, software, dynamic knowledge representations,
machine-readable chemical structures, etc. The Pathways Core model
(see the OWL schema, encoded in XML) uses nested entities to represent
recursive "digital objects within digital objects", allows the
association of multiple properties with entities, and uses the
hasDatastream property to provide access to entities' constituent
datastreams.
http://www.dlib.org/dlib/october06/vandesompel/10vandesompel.html
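
A loose sketch of the compound-object idea, with invented class names
rather than the actual Pathways Core OWL vocabulary, might look like
this in Java:

  // Illustrative sketch of the compound-object idea described above: an
  // entity can nest other entities ("digital objects within digital
  // objects"), carry arbitrary properties, and expose its constituent
  // datastreams. Names are illustrative only, not Pathways Core terms.
  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  class Datastream {
      final String mediaType;   // e.g. "application/pdf", "text/csv"
      final String location;    // URL from which the bytes can be obtained
      Datastream(String mediaType, String location) {
          this.mediaType = mediaType;
          this.location = location;
      }
  }

  class Entity {
      final Map<String, String> properties = new HashMap<String, String>();
      final List<Entity> nestedEntities = new ArrayList<Entity>();      // recursion
      final List<Datastream> datastreams = new ArrayList<Datastream>(); // hasDatastream

      void addProperty(String name, String value) { properties.put(name, value); }
      void addEntity(Entity child) { nestedEntities.add(child); }
      void addDatastream(Datastream ds) { datastreams.add(ds); }
  }

  public class CompoundObjectExample {
      public static void main(String[] args) {
          Entity article = new Entity();
          article.addProperty("type", "scholarly article");
          article.addDatastream(new Datastream("application/pdf",
                  "http://repository.example.org/obj/1/fulltext.pdf"));

          Entity dataset = new Entity();                 // nested digital object
          dataset.addProperty("type", "dataset");
          dataset.addDatastream(new Datastream("text/csv",
                  "http://repository.example.org/obj/1/data.csv"));
          article.addEntity(dataset);
      }
  }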
----------------------------------------------------------------------
Understanding the Service Lifecycle within a SOA: Design Time
Quinton Wall
Service-Oriented Architecture (SOA) is an architectural approach
that relies on decomposing business processes and lower-level activities
into standards-based services. These services may be fine-grained,
coarse-grained, presentation-centric, data-centric, or any number of
other permutations. The ability to effectively manage the lifecycle of
services is fundamental to achieving success within a SOA initiative.
The discussion is divided into two articles focusing on the design-
time and run-time aspects of the lifecycle, respectively. This first
article covers the design-time phases of the service lifecycle, with
particular attention to design-time needs for shared services.
Establishing fundamentals early, such as methodology, categorization
guidelines, and development tools, is crucial to early and continued
success. By beginning to break the traditional application development
paradigms and focus on business processes as the blueprint for moving
forward, service engineering teams can provide closer alignment to
business needs in a timely and efficient manner. The second part of
this article will focus on the run-time aspects of the shared service
lifecycle.
http://dev2dev.bea.com/pub/a/2006/08/soa-service-lifecycle-design.html
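
Purely as an illustration of the granularity distinction mentioned
above (the interfaces are invented for this newsletter, not drawn from
the article), fine-grained services expose individual activities while
a coarse-grained service wraps a whole business process:

  // Illustration of service granularity, not taken from the article: a
  // fine-grained service exposes one low-level activity, while a
  // coarse-grained service exposes a business process composed from them.
  interface CustomerLookupService {          // fine-grained activity
      Customer findCustomer(String customerId);
  }

  interface CreditCheckService {             // fine-grained activity
      boolean isCreditworthy(Customer customer);
  }

  interface OrderFulfillmentService {        // coarse-grained business process
      // Internally composes lookup, credit check, inventory, shipping, etc.
      OrderConfirmation placeOrder(String customerId, Order order);
  }

  // Placeholder domain types used only to make the interfaces compile.
  class Customer {}
  class Order {}
  class OrderConfirmation {}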
----------------------------------------------------------------------
W3C Working Draft for Remote Events for XML (REX)
Robin Berjon (ed), W3C Technical Report
W3C has released an updated version of "Remote Events for XML (REX)
1.0." A joint effort of the W3C SVG and Web API Working Groups, the
REX Task Force has released the updated draft with usage examples.
The Remote Events for XML (REX) specification defines a transport
agnostic XML syntax for the transmission of DOM events as specified
in the DOM 3 Events specification in such a way as to be compatible
with streaming protocols. REX assumes that the transport provides for
reliable, timely, and in-sequence delivery of REX messages. REX does
not cover session initiation and termination, which are
presumed to be handled by other means. The first version of the
specification deliberately restricts itself to the transmission of
mutation events (events which notify of changes to the structure or
content of the document) so as to remain limited in scope and allow
for progressive enhancements to implementations over time rather than
require a large specification to be deployed at once. The framework
specified here is however compatible with the transmission of any other
event type, and great care has been taken to ensure its extensibility
and evolvability.
http://www.w3.org/TR/2006/WD-rex-20061013/
See also the W3C news item: http://www.w3.org/News/2006#item189
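
For a sense of the event class involved, here is a hedged Java sketch
that listens for DOM mutation events and serializes them into a
placeholder envelope; the element names are illustrative only, not the
REX 1.0 syntax, which is defined in the Working Draft itself:

  // Sketch only: listening for DOM mutation events (the event class REX
  // 1.0 restricts itself to) and serializing them for transmission. The
  // <remote-event> envelope below is an invented placeholder, NOT the REX
  // syntax; see the Working Draft for the real vocabulary. Requires a DOM
  // implementation that supports DOM Level 2 Events (e.g. Apache Xerces).
  import org.w3c.dom.Document;
  import org.w3c.dom.Node;
  import org.w3c.dom.events.Event;
  import org.w3c.dom.events.EventListener;
  import org.w3c.dom.events.EventTarget;

  public class MutationEventForwarder implements EventListener {
      public void handleEvent(Event evt) {
          Node target = (Node) evt.getTarget();
          // Placeholder serialization; a real REX sender would emit the
          // message format defined in the specification.
          String message = "<remote-event type=\"" + evt.getType()
                  + "\" target=\"" + target.getNodeName() + "\"/>";
          send(message);
      }

      private void send(String message) {
          System.out.println(message);   // stand-in for the streaming transport
      }

      public static void register(Document doc, MutationEventForwarder f) {
          EventTarget root = (EventTarget) doc.getDocumentElement();
          // Observe structural and content changes anywhere below the root.
          root.addEventListener("DOMNodeInserted", f, true);
          root.addEventListener("DOMNodeRemoved", f, true);
          root.addEventListener("DOMCharacterDataModified", f, true);
      }
  }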
----------------------------------------------------------------------
Let the Browser Wars Begin
Steven J. Vaughan-Nichols, DesktopLinux.com
Firefox 2.0 is almost here, and Microsoft is expected to start pushing
out Internet Explorer 7 to users via the Windows Automatic Update
software-distribution mechanism by year's end. In short, the browser
wars are about to begin again. Depending on whose numbers you believe,
Firefox has been continuing to erode IE's (Internet Explorer's) lead.
According to Janco Associates, Internet Explorer has continued to lose
market share in 2006, bottoming out at a 75.88 percent share in July,
down from 77.01 percent in January and 84.05 percent in July
2005. OneStat.com, meanwhile, reported earlier this week that the global
usage share of IE has grown to 85.85 percent. That's a jump of 2.8
percent since July, by their counting. Firefox, on the other hand, is
at 11.49 percent, a decrease of 1.44 percent since the web analytics
specialist reported its July data. The rest of IE's gain came at the
expense of Opera and the other browsers. As for Linux and browsers,
DesktopLinux's recent survey of Linux users found that Mozilla's
Firefox browser dominates the field. Firefox came in with 58.2 percent
usage, followed by Konqueror at 16.3 percent, and Opera at 12 percent.
Of all the other browsers, only Mozilla, at 4.7 percent, and Epiphany,
GNOME's default browser, at 2.7 percent, grabbed more than 2 percent
of the users. With new browser versions coming out from both Mozilla
and Microsoft in the coming weeks, however, we can expect to see
dramatic changes in the overall browser market.
http://www.desktoplinux.com/news/NS4598252412.html
----------------------------------------------------------------------
JBoss Awakens Hibernate with Persistence API
Paul Krill, InfoWorld
With the release of Version 3.2, Hibernate is touting its certified
support for the JPA (Java Persistence API) introduced in Java EE
(Enterprise Edition) 5. This API is featured as a way to simplify
development of Java EE applications that use data persistence.
Hibernate now can be used as a portable Java Persistence provider
for any Java EE 5 application server. With Version 3.2, JBoss has
simplified Hibernate packages to support popular development
frameworks. Developers have a persistence offering to work with
native Hibernate, Java Development Kit (JDK) 5.0 annotations, the Java
Persistence API, or EJB (Enterprise JavaBeans) 3.0. Another new
feature in Version 3.2 is customizable context management for Java
environments. Also, the optimistic locking function used for record-
locking can now operate across a cluster with the new JBoss Cache
provider. Declarative data filters are featured for the transparent
definition of dynamic data
views. Enhanced query options and query language are included in
Version 3.2 as well. Also offered as part of the Hibernate 3.2
release are modular bundles, including Hibernate Core, which is a
high-performance query service for object-relational mapping usage.
It features a data management and query API and object-relational
mapping with XML metadata. Hibernate Annotations in Version 3.2
include several packages of JDK (Java Development Kit) 5.0 code
annotations for mapping classes as a replacement or in addition to
XML metadata. The Hibernate EntityManager in Version 3.2 implements
Java Persistence programming interfaces, object lifecycle rules, and
query options as defined by Java Specification Request 220.
http://www.infoworld.com/article/06/10/16/HNhibernate32_1.html
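
A minimal JPA sketch, assuming a persistence unit named "newsPU"
configured in persistence.xml, shows the portable javax.persistence API
that Hibernate 3.2 now implements:

  // Minimal JPA (JSR 220) sketch: an annotated entity persisted through
  // the javax.persistence EntityManager API that Hibernate 3.2 implements.
  // The persistence unit name "newsPU" is an assumption for this example.
  import javax.persistence.Entity;
  import javax.persistence.EntityManager;
  import javax.persistence.EntityManagerFactory;
  import javax.persistence.GeneratedValue;
  import javax.persistence.Id;
  import javax.persistence.Persistence;

  @Entity
  class Article {
      @Id @GeneratedValue
      private Long id;
      private String title;

      protected Article() {}                 // JPA requires a no-arg constructor
      Article(String title) { this.title = title; }
  }

  public class JpaExample {
      public static void main(String[] args) {
          EntityManagerFactory emf =
              Persistence.createEntityManagerFactory("newsPU");
          EntityManager em = emf.createEntityManager();
          em.getTransaction().begin();
          em.persist(new Article("Hibernate 3.2 released"));  // JPA, not native API
          em.getTransaction().commit();
          em.close();
          emf.close();
      }
  }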
----------------------------------------------------------------------
Integrate GridFTP and Grid Protocols into Firefox/Mozilla-based Tools
Karan Bhatia, Michela Taufer, et al., IBM developerWorks
The GridFTP protocol is an extension to the standard File Transfer
Protocol (FTP) with support for security based on the Globus Grid
Security Infrastructure (GSI), high-performance data transfer using
striping and parallel streams, and support for third-party transfer
across different GridFTP servers. GridFTP is a standard component of
the Globus Toolkit and includes the server component and a set of client
applications. Access to the GridFTP server requires user authentication
using GSI, followed by the use of a client application, such as the
command-line application UberFTP. Because of this, GridFTP users must
install and configure the Globus Toolkit software on their client
machines -- a high burden, given the complexity of the software. In
contrast, standard FTP is directly built into most browsers, allowing
users to simply type an FTP URL in the address bar of the browser and
browse, upload, and download their files. In this article, we show how
to integrate the GridFTP protocol into the Firefox browser in order to
enable the same behavior as standard FTP. The user simply supplies a
gsiftp URL and can then browse, upload, and download files from the server.
User authentication is provided by the Grid Account Management
Architecture (GAMA) system. This extension, called Topaz, is available
in binary or source formats.
http://www-128.ibm.com/developerworks/opensource/library/gr-firefoxftp/
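
A simplified sketch of the idea, with a hypothetical URL and a
placeholder where a real GridFTP client library and GSI credentials
would be used, shows gsiftp URLs being dispatched alongside plain FTP:

  // Sketch of the user-facing idea: treat a gsiftp:// URL like an ftp://
  // URL and dispatch it to the appropriate handler. The GridFTP branch is
  // a placeholder; the Topaz extension does this inside Firefox, and a
  // real Java client would use a GridFTP library plus GSI credentials.
  import java.net.URI;

  public class GridUrlDispatcher {
      public static void open(String url) throws Exception {
          URI uri = new URI(url);
          if ("gsiftp".equals(uri.getScheme())) {
              // Placeholder: authenticate with GSI, then browse/get/put via GridFTP.
              System.out.println("GridFTP transfer to host " + uri.getHost()
                      + ", path " + uri.getPath());
          } else if ("ftp".equals(uri.getScheme())) {
              // Standard FTP is handled natively by the browser or java.net.
              System.out.println("Plain FTP transfer to " + uri);
          } else {
              throw new IllegalArgumentException(
                      "Unsupported scheme: " + uri.getScheme());
          }
      }

      public static void main(String[] args) throws Exception {
          open("gsiftp://gridftp.example.org/home/user/results.dat");  // hypothetical URL
      }
  }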
----------------------------------------------------------------------
BEA Systems, Inc. http://www.bea.com
IBM Corporation http://www.ibm.com
Innodata Isogen http://www.innodata-isogen.com
SAP AG http://www.sap.com
Sun Microsystems, Inc. http://sun.com
----------------------------------------------------------------------
Newsletter subscribe: xml-dailynews-subscribe@lists.xml.org
Newsletter unsubscribe: xml-dailynews-unsubscribe@lists.xml.org
Newsletter help: xml-dailynews-help@lists.xml.org
Cover Pages: http://xml.coverpages.org/
----------------------------------------------------------------------