Recently in Web services Category

WS-Policy WG


I've had the privilege of working with the WS-Policy Working Group (WG) at W3C over the last few months. I know, it may seem hard to believe that serving on a standards body working group is a privilege, and often it does seem like a chore, but there are several reasons why I feel this way.

Firstly, I'm gaining new experience: experience of standards body processes. It's always a privilege to learn something new. And secondly, these are a smart bunch of people. At times some of the debate seems trivial, but very smart people are putting their minds together in order to develop standards that will make Web services more interoperable, with more advanced and rich features in the future.

Companies like Microsoft, BEA, Sun Microsystems, IONA, SAP, Sonic Software, Nokia, IBM, Nortel, Adobe, webMethods, etc. invest lots of resources in these standards bodies (see the WS-Policy Participants). Some of the people are in several working groups and have basically built a career just working on standards. And it is certainly not a cushy number. These people work hard on some very tedious material! It can do your head in!

I am an infant in this world. Though I have lots of enterprise computing and interoperability experience, I feel like a complete novice. I'm fortunate to have landed with a very civil bunch who are gracious about bringing me up to speed.

Now there are many times that this sort of working group activity will do my head in. Bickering over the semantics of a word or the usage of a word or the absence of a word is not how I'd like to spend my day. But I've come to appreciate what can happen when ideas and standards are ambiguous. Chaos can ensue and perfectly good initiatives can die.

I'm hoping to pull together and post an article giving an overview of WS-Policy. Stay posted.

This week the WG had a face-to-face in Bellevue, Washington. I finally got to meet the people I've been talking to on conference calls every week for the last few months. We got to find out a little more about each other - not just our views on WS-Policy. Bellevue/Seattle was beautiful when I arrived but turned ugly from Wednesday. It was wet like Ireland. We did have a wonderful meal at the Seastar restaurant. I'd recommend it.

Disappearing Web Services


One of the problems with WWW based Web services is that they're not always very reliable. It's all very well talking about the multitude of Web services that are going to be on the Web, but when you actually go looking using a tool like Google you find that there aren't as many as you'd think, there is little indication that they are current, and there is no guarantee they will be there tomorrow.

I went looking for a simple stock quote Web service and found only a handful. I found one that worked well on the xmethods.net Web site; it's the example in the Ruby Cookbook (page 630). I'm not sure whether it was my posting a Ruby based Web service client on Tuesday causing some sort of spike in usage, but since yesterday this service no longer exists! It's not even listed anymore. So those of you who were thinking "this Ruby program doesn't work!" are right! It no longer works. I will be fixing that.
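
For the record, the client was roughly along these lines. This is a sketch from memory using the soap4r library from Ruby's standard library; the endpoint and namespace below are just placeholders, since the real service has vanished.

    # Rough sketch of a soap4r stock quote client (Ruby 1.8 standard library).
    # The endpoint and namespace are placeholders - the original xmethods
    # service is gone.
    require 'soap/rpc/driver'

    endpoint  = 'http://example.com/soap/stockquotes'  # placeholder
    namespace = 'urn:example-delayed-quotes'            # placeholder

    driver = SOAP::RPC::Driver.new(endpoint, namespace)
    driver.add_method('getQuote', 'symbol')

    puts driver.getQuote('IBM')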

When you do find a Web service using the Google approach, it might be dated 2003, or something like that. Which is fine - if a service written in 2003 worked and still works fine then there is no need to change it. However, as a user, you'd still like to know that it is current and won't be gone in like ... a DAY!

So then I found a WSDL for a simple Stock quote Web service on the IBM site. I had to change my Ruby code to use the WSDL instead of the API that uses the URL and namespace. But as it turns out this service no longer exists either!
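
For reference, the change amounts to something like this (again a sketch; the WSDL URL is only a placeholder, since that service is gone too):

    # Sketch of the WSDL-driven version using soap4r's WSDL driver.
    # The WSDL URL is a placeholder - the IBM-hosted one has disappeared.
    require 'soap/wsdlDriver'

    wsdl   = 'http://example.com/stockquote.wsdl'  # placeholder
    driver = SOAP::WSDLDriverFactory.new(wsdl).create_rpc_driver

    # The operation name and signature come from the WSDL rather than
    # from explicit add_method calls.
    puts driver.getQuote('IBM')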

So I'll hunt down another stock quote Web service with a similar schema and post the updated Ruby file to the end of this post later. There are several out there; it's just a matter of finding which one is still there. I might use Yahoo.

Ruby, Atom, and Web services


I started working on a project over the weekend to pull together a demonstration of using Ruby, RSS or Atom, and Web services. I also want to add Artix and its Data Services into the mix later. The final result will be a mashup, but I want to start publishing some of the work as I go along.

I hadn't worked with Ruby before. I had already started learning PHP but was encouraged by Steve Vinoski to start looking at Ruby. Jim Watson and Greg Lomow, whom I had worked with developing a language back in 1997, finally convinced me. So I bought two books: the Ruby Cookbook and Programming Ruby: The Pragmatic Programmers' Guide. I then loaded up a bunch of bookmarks to Ruby sites in my browser and off I went.

Then it was off to work on some code.

First I built a couple of classes to write an Atom based feed - an atom.xml file. These two classes handle writing out the feed header information and then each of the entries.
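
They are roughly along these lines. This is a simplified sketch; the class names and feed details here are just for illustration, not necessarily what ends up in the final code.

    # Simplified sketch: one class writes the Atom feed header, the other
    # writes individual entries. Class names and feed details are illustrative.
    require 'time'

    class AtomFeedHeader
      def initialize(title, link, author)
        @title, @link, @author = title, link, author
      end

      def write(io)
        io.puts %(<?xml version="1.0" encoding="utf-8"?>)
        io.puts %(<feed xmlns="http://www.w3.org/2005/Atom">)
        io.puts "  <title>#{@title}</title>"
        io.puts %(  <link href="#{@link}"/>)
        io.puts "  <id>#{@link}</id>"
        io.puts "  <updated>#{Time.now.iso8601}</updated>"
        io.puts "  <author><name>#{@author}</name></author>"
      end
    end

    class AtomEntry
      def initialize(title, link, summary)
        @title, @link, @summary = title, link, summary
      end

      def write(io)
        io.puts "  <entry>"
        io.puts "    <title>#{@title}</title>"
        io.puts %(    <link href="#{@link}"/>)
        io.puts "    <id>#{@link}</id>"
        io.puts "    <updated>#{Time.now.iso8601}</updated>"
        io.puts "    <summary>#{@summary}</summary>"
        io.puts "  </entry>"
      end
    end

    File.open('atom.xml', 'w') do |f|
      AtomFeedHeader.new('IPBabble feed', 'http://www.ipbabble.com/', 'William Henry').write(f)
      AtomEntry.new('First entry', 'http://www.ipbabble.com/first', 'A test entry').write(f)
      f.puts '</feed>'
    end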


I see that Barry has been busy over at Haute Techno with a new Mashup using CeltiXfire (aka Celtix) and a number of other technologies including Google Maps. (Google Maps must be the most widely used set of web service APIs for mashups.)

You can download the demo, which includes documentation, from the site. Barry says that it was the most fun he's had in a long time and that he'll be evolving the mashup.

I'm looking forward to seeing more CeltiXfire and Artix based mashups soon. I believe Steve Vinoski has been working on some too.

Recently I have been involved in some projects that introduce SOA as a means to reduce testing efforts in large organizations. Taking two or three weeks out of the testing phase of a project and shortening the overall software development lifecycle (SDLC) can save millions of dollars for large organizations.

IONA's Professional Services organization has been researching and implementing methods and practices that can leverage the benefits of SOA for integration testing. Many of IONA's larger customers have a plethora of middleware and platforms. Having a consistent and automated approach to testing across these various technologies has been very difficult until now.

Leveraging best practices, testing tools and the unique capabilities of Artix, IONA PS has developed Certification Kits that allow for independent testing of integration end-points by disparate or remote groups. And this can be achieved no matter what the underlying middleware: CORBA, MQ Series, Tuxedo, Tibco, J2EE, Web services, CICS, etc. This is achieved by harnessing the unique capabilities of Artix as an Extensible ESB. Artix employs WSDL as the common underlying interface definition, or contract, between the endpoints. WSDL is employed no matter what the underlying technology.

At one customer, IONA PS were able to take three weeks out of the testing phase. Consider how many lifecycles each application or team has per year and how many teams there are. You can then imagine how the savings add up. And then consider the time to market advantages.

I was a skeptic myself until recently. I saw an impressive demonstration of the capabilities performed by my old buddy Ashwin Karpe. Really great stuff!

I wrote a blog entry back in January about using RSS as a cheap way to do a services registry. I'm told by one source that, as a result, several hundred downloads of Celtix were generated in the first week after that posting. It is still the most read entry on my blog. The second most read posting on IPBabble.com is my posting on JBI.

I've received a lot of positive feedback on this article. Many asked me why I didn't come out stronger against UDDI. I've noted since then that there seems to be a lot of activity around using RSS for a registry. I think it can be used as is today for small numbers of services, but some enhancements are required in order for it to scale: more specifically, standardization, tooling, and federation.
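
To be concrete about what "as is today" means, Ruby's standard library can already produce a minimal registry feed along these lines (the service names and URLs below are made up for illustration):

    # Minimal sketch of an RSS 2.0 "service registry" feed using Ruby's
    # standard rss/maker library. Service names and URLs are made up.
    require 'rss/maker'

    services = {
      'StockQuote' => 'http://example.com/services/stockquote?wsdl',
      'OrderEntry' => 'http://example.com/services/orders?wsdl'
    }

    feed = RSS::Maker.make('2.0') do |maker|
      maker.channel.title       = 'Service registry'
      maker.channel.link        = 'http://example.com/services'
      maker.channel.description = 'Currently available service endpoints'

      services.each do |name, wsdl|
        item = maker.items.new_item
        item.title       = name
        item.link        = wsdl
        item.description = "WSDL for the #{name} service"
      end
    end

    puts feed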

But it does raise the questions: Why is UDDI not just killing off this sort of ad hoc approach? Where is UDDI in terms of adoption? Are people buying it? One would think that it would dominate the services repository market, but it hasn't. It may dominate the commercial repository space, but there are a lot of ad hoc approaches going on in SOA implementations.

Some industry leaders I've spoken to simply don't like UDDI. "It's too hierarchical", "It's too much like a CORBA Naming Service", "I need something that's more flexible in terms of lookup". For many, as I mentioned in that earlier posting, it's just too big for the few services that they have started with. Others seem to be aligning with my view of using technologies like RSS to provide a cheap and easy to use registry. (And if the hits on my original post are anything to go by, we should see much more of this.)

In my original post I tried to stay neutral on UDDI saying:

"I am not trying to replace UDDI. UDDI is the right and standard approach for discovering Web services. In fact I think that my idea can compliment UDDI ..."

Though I still maintain that the RSS approach can complement UDDI, I'm leaning more towards the "who needs it" (UDDI) approach, based on the response I've received. I'll need stronger convincing that UDDI is necessary. Perhaps UDDI needs an overhaul?

I've been arRESTed!


Yes apparently my babble is causing unREST.

Mark Baker thinks I speak rubbish in my Poor Service Semantics post.

But Mark is just shifting around where the semantics of what you're doing are. That's all. All this RESTing has him confused. ;-)

Look, when you get down to it, everything (in this service world) is just a string coming over the wire. Web services, REST, CORBA, etc. are really just arguments about where the unwrapping of the string occurs.

Take CORBA for example. When you look down into it there is basically one operation, called a Request, that puts a big blob of data (a document?) on the wire. At some point the data gets unwrapped and examined, and a dispatch to a piece of code (the business logic) occurs. So is CORBA RESTful?? Is the Request just really a PUT in disguise?? ;-)

Either you have one location and you put the semantics of what you're doing (the operation) in the document - e.g. orderItem - or there is a different URL location for each operation. So either the document semantically tells you what you want to do (and you still need to dispatch to the right code), or else each location only performs one operation and you only send the data there. Ironically the former sounds very like CORBA, i.e. the CORBA dispatcher sends the data to the right location of the business logic. ... But then so does the latter. Wow, CORBA is definitely not supposed to be RESTful but it sounds like it. (I'm being facetious.) The reason you can look at it both ways is that, again, at some point you need to dispatch a piece of data to a specific piece of code, and at that point it becomes very "tightly coupled" no matter what the technology. Is a RESTful internet just a big ORB? Well, I'm sure many of the REST folks would just say so.

I get back to my point. Everyone is getting bent out of shape about this, and really it's just a matter of where the big string of data gets unwrapped and who wants to do the unwrapping. But when you do pick a particular technology to do your unwrapping, then you should take advantage of the capabilities of that technology. And in CORBA, a runIT(in String blob) operation, where there are dozens of business functions hidden in the string, is definitely not taking advantage of the underlying infrastructure and is a waste of money.
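
To make that concrete, here's a toy sketch of the two styles (all the names are invented for illustration; nothing here is from any particular framework):

    # Toy sketch of the two dispatch styles - all names are invented.

    def order_item(payload) "ordered #{payload}"  end
    def get_quote(symbol)   "quote for #{symbol}" end

    # Style 1: one generic operation; the semantics travel inside the document
    # and the receiver unwraps it and dispatches internally - CORBA-Request-like,
    # or a single runIT(in String blob) operation.
    def handle_request(document)
      case document[:operation]
      when 'orderItem' then order_item(document[:payload])
      when 'getQuote'  then get_quote(document[:payload])
      else raise "unknown operation: #{document[:operation]}"
      end
    end

    # Style 2: a different location per operation; the semantics live in the
    # address and each location does exactly one thing - REST-like.
    HANDLERS = {
      '/orders' => method(:order_item),
      '/quotes' => method(:get_quote)
    }

    def handle_put(path, payload)
      HANDLERS.fetch(path).call(payload)
    end

    # Either way, somewhere a specific piece of data ends up at a specific
    # piece of code.
    puts handle_request(:operation => 'getQuote', :payload => 'IBM')
    puts handle_put('/quotes', 'IBM')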

The big issue is not whether a PUT or an orderItem is better. That depends on the technology being used. If REST provides a great framework for maintaining, tracking, documenting, etc. the real location and semantics of the operation, then I think it's great. I can definitely see where REST makes a lot of sense. There is a LOT of waste and misunderstanding in other technologies. (There is a whole lot of complication in CORBA that I thought made it inaccessible to many developers and would drive anyone to a more RESTful state of mind.)

Imagine if the anchor tag of a hyperlink didn't allow you to specify any source. Imagine if it had an implicit "GET" as the source. So on any web page the only semantic you had was "GET" as the link to the underlying href. Hyperlinks would be rather boring and not very well documented. And that's my point: the semantics need to be there not only for the operation but also for understanding the way the business works. Where the semantics are depends on the technology used and how richly it lets you document those semantics.

I'm looking forward to understanding more about how that works in REST. Perhaps it's just brilliant. Remember I was the one advocating using RSS feeds as a service lookup repository instead of UDDI a few months back. Boy it doesn't take long to fall from grace.

Who is IPBabble

IPBabble is the personal blog of William Henry.

William has over 20 years' experience in software development and distributed computing and holds an M.Sc. from Dublin City University. He is currently working in the office of the CTO at Red Hat, on the Emerging Technologies team. This weblog is not funded by Red Hat.

Posts are intended to express independent points of view, but understand that there is probably a bias based on the influence of working with standards-based middleware for over a decade. (See the disclaimer below.)


Disclaimer

The views expressed in this blog are solely the personal views of the author and DO NOT represent the views of his employer or any third party.

About this Archive

This page is an archive of recent entries in the Web services category.
