Archive for June, 2008

BizTalk AS2 and EDI error documentation

Today while debugging some AS2 and EDI message tests, I came across an MSDN area that I think deserves much more visibility. Whenever I encounter an error number or code in the event log I have to go to a search engine and look it up to find more information on the error. Many times the error codes are documented on third-party websites where administrators can punch in a hex value to determine the error category. Today, while trying to get to the bottom of an EDI error, I found the following MSDN listing of AS2 and EDI error details: http://msdn.microsoft.com/en-us/library/bb968036.aspx. If you go to this link, the article itself will not have any subarticles linked below it. You will need to use the table of contents in the left frame to see all of the subarticles, which cover the different errors you might receive when doing AS2 and EDI processing in BizTalk. Each error article also suggests possible resolution strategies, which is really helpful. I just wanted to pass this information along for anyone doing diagnostics on AS2 and EDI processing in BizTalk, because it has helped me enormously.

PeopleSoft EDI Resources

I am currently working at a client who is integrating PeopleSoft EDI files through BizTalk to the GXS Trading Grid. Judging by the number of people who have integrated PeopleSoft with BizTalk, this appears to be a pretty big frontier. I wanted to post a few helpful resource links for working with PeopleSoft and EDI. Fortunately, there is some really good documentation produced by PeopleSoft about its EDI capabilities and the EDI formats its products support. The most important resource I have found to date is the PeopleSoft Enterprise EDI 9 PeopleBook – http://download.oracle.com/docs/cd/B31513_01/psft/acrobat/fscm9edi-b0806.pdf – which describes how to set up inbound and outbound EDI transactions. This PDF book also lists the individual X12 EDI messages that each product within the PeopleSoft Financials & Supply Chain suite can export. It's particularly interesting to see that different products within the suite can export different versions of the EDI schemas. For example, an 850 message can be generated from the Purchasing application as well as the Sales application.
 
The PeopleSoft adapter, part of the Enterprise Applications adapter pack, is another way to integrate PeopleSoft with BizTalk. I will be posting more details on how the integration goes and whether there are any gotchas to watch out for. Here is a tutorial for setting up the PeopleSoft adapter: http://msdn.microsoft.com/en-us/library/aa561134.aspx.
 
Thanks,

The importance of IT knowledge for a BizTalk Developer

The past few days I have been working at a client that is ultra-secure and have been tasked with installing BizTalk under insane security constraints. I have worked in environments in the past that religiously used the DISA gold disks (http://iase.disa.mil/stigs/SRR/index.html) and had to follow the NSA security review documents. A similar tool on the Microsoft side is the Microsoft Baseline Security Analyzer (MBSA). Guidance like this basically enables an IT administrator to lock down a system so tightly that Microsoft server products are unable to run. Microsoft's security bulletins give administrators ways to harden a system through group policy settings, IPSec policy, and many other configuration settings. The bulletin articles (like http://www.microsoft.com/technet/security/bulletin/MS05-051.mspx) even mention that modifying settings this way will break certain Microsoft server products like BizTalk, SQL Server, MSMQ, etc. So I realized it is really important to know that if you are in an ultra-secure environment and are being asked to set up a Microsoft server product, you should definitely start with a baseline that is not already completely locked down and then slowly harden it while testing. Otherwise you will wind up going down a million rabbit holes trying to get connectivity working and may never make much progress.
 
In other words, make sure you have a test and development strategy that corresponds to a security baselining strategy. If you do not have more than one quality environment (that is, your production environment is also your test environment), then you will probably get stuck debugging security issues. Either way, be sure that baselines are created or established after doing functional testing rather than before. Some people will argue that it's better to do it right the first time. I generally agree, but if you are in an ultra-secure environment where it is hard to know whether a product will work at all because of the security, you are better off knowing it works first and then hardening the application over time.
 
Here is a list of some of the things I have had to do just to get BizTalk installed and partially configured:
 
Install network COM+ and network DTC; configure group policy to allow COM+ and DTC to run on multiple servers; configure DTC settings so that Remote Clients and Network DTC Access are enabled; and configure the COM+ NTFS permissions on %windir%\registration (see http://support.microsoft.com/kb/909444 for an example of ultra security – configuring security beyond even Microsoft's recommendations).
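
For the DTC piece, here is a minimal sketch of the registry flags that the Component Services snap-in toggles when you enable Network DTC Access and remote clients. This is an assumption on my part about how you might script it – I believe the key path and value names below are the standard MSDTC security settings, but normally you would set them through dcomcnfg or group policy, and you should verify them against the relevant KB articles for your OS version.

```csharp
using System;
using Microsoft.Win32;

class EnableNetworkDtc
{
    static void Main()
    {
        // Hypothetical sketch only: these are the MSDTC security flags that the
        // Component Services snap-in (dcomcnfg) toggles when you enable
        // "Network DTC Access", "Allow Remote Clients", and inbound/outbound
        // transactions. Run elevated, and restart the MSDTC service afterwards.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\MSDTC\Security", true))
        {
            if (key == null)
                throw new InvalidOperationException("MSDTC security key not found.");

            key.SetValue("NetworkDtcAccess", 1, RegistryValueKind.DWord);
            key.SetValue("NetworkDtcAccessClients", 1, RegistryValueKind.DWord);
            key.SetValue("NetworkDtcAccessTransactions", 1, RegistryValueKind.DWord);
            key.SetValue("NetworkDtcAccessInbound", 1, RegistryValueKind.DWord);
            key.SetValue("NetworkDtcAccessOutbound", 1, RegistryValueKind.DWord);
        }
    }
}
```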

Future of Send/Receive Ports

Today I attended a session on the Windows Workflow enhancements included with .NET 3.5, which introduces two new WF shapes for sending messages to other workflows or WCF services: the Send and Receive activities. Here is a link to a good article on these enhancements: http://msdn.microsoft.com/en-us/magazine/cc164251.aspx. In the BizTalk world, processes are defined using the orchestration designer; the send and receive parts of the process are defined on the sides of the designer window and are connected visually with lines to the send and receive shapes. Here is an example image which shows these lines: http://www.traceofthought.net/content/binary/btmsmq.jpg. From a visual perspective, if you have a lot of send and receive ports – especially in scenarios where you are sending lots of messages or multiple messages from a single send shape – there can be lines all over the design surface, which gets messy very quickly. In fact, the lines can become a huge distraction when designing an orchestration. In .NET 3.5 the Send and Receive activities use arrows to denote the messaging direction rather than connecting lines. While the connecting line does provide a visual cue in the orchestration designer that the port is set up properly, I think not having the lines is much cleaner and more manageable.
 
The ContextExchange class was also discussed, and I got a glimpse of how context properties can be read through its inner channel (the context channel). Context properties are an important BizTalk feature I had been looking for a replacement for. Here is a good PowerPoint that describes this: http://download.microsoft.com/download/5/8/1/5810d618-361f-4f47-943c-b20c0d420178/DEV340.ppt.
 
I also learned that the WF workflow instance Id is important for calling back into a long-running WCF service marked with the DurableService attribute. The thing to know is that this Guid must be stored somewhere by your application, because the WF infrastructure does not provide a built-in location for it. Passing the value back in on a later call to the long-running process reactivates the dehydrated (persisted) workflow instance.
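
As a rough illustration of that round trip, here is a minimal client-side sketch using the context channel (IContextManager) to capture and later reapply the instance id. The contract, endpoint name, and operations are hypothetical, and it assumes the service is marked with DurableService behind a context-enabled binding such as wsHttpContextBinding.

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void StartOrder(string item);

    [OperationContract]
    void CompleteOrder();
}

class DurableClientSketch
{
    static void Main()
    {
        // Assumes an endpoint named "OrderServiceEndpoint" in app.config that
        // uses a context-enabled binding (e.g. wsHttpContextBinding) pointing
        // at a service marked with [DurableService].
        var factory = new ChannelFactory<IOrderService>("OrderServiceEndpoint");

        // First call: the service creates and persists a new durable instance.
        IOrderService proxy = factory.CreateChannel();
        proxy.StartOrder("widget-42");

        // Grab the instance id from the context channel and store it yourself
        // (database, session state, etc.) -- WF/WCF will not remember it for you.
        var contextManager = ((IClientChannel)proxy).GetProperty<IContextManager>();
        string instanceId = contextManager.GetContext()["instanceId"];
        ((IClientChannel)proxy).Close();

        // Later (possibly days later, from another process): reapply the stored
        // id before calling back in, which reactivates the dehydrated instance.
        IOrderService laterProxy = factory.CreateChannel();
        var laterContext = ((IClientChannel)laterProxy).GetProperty<IContextManager>();
        laterContext.SetContext(new Dictionary<string, string> { { "instanceId", instanceId } });
        laterProxy.CompleteOrder();
        ((IClientChannel)laterProxy).Close();
    }
}
```

The key point is the same one from the session: nothing stores that Guid for you, so persisting it somewhere your application can find it again is entirely your job.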
 
Thanks for tuning in about my TechEd sessions!

WCF the REST way

Today I attended a session on using WCF through the REST model of web service interactions. This model is based on HTTP requests and uses the standard HTTP verbs: GET, POST, PUT, and DELETE. It relies more heavily on URIs as a way to clearly define resource endpoints. Jon Flanders talked about techniques for defining well-known URLs based on the general experience most people have with the web and URLs. Resources in the REST model are useful because they more clearly identify entities within a business object model. So rather than having a service method called GetWidgets, you would have a Widgets resource and issue a GET request to retrieve the widgets.
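
To make that concrete, here is a minimal sketch of what a resource-style contract might look like using the .NET 3.5 WCF web programming model (WebGet/WebInvoke over webHttpBinding). The Widgets contract and types are hypothetical – the point is simply that the URI names the resource and the HTTP verb carries the intent.

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical Widgets resource: the URI names the entity and the HTTP verb
// carries the intent, instead of RPC-style method names like GetWidgets.
[ServiceContract]
public interface IWidgetCatalog
{
    // GET /widgets -> the collection resource
    [OperationContract]
    [WebGet(UriTemplate = "/widgets")]
    List<Widget> GetAllWidgets();

    // GET /widgets/{id} -> a single widget resource
    [OperationContract]
    [WebGet(UriTemplate = "/widgets/{id}")]
    Widget GetWidget(string id);

    // PUT /widgets/{id} -> create or replace a widget (body carries the widget)
    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "/widgets/{id}")]
    void PutWidget(string id, Widget widget);

    // DELETE /widgets/{id} -> remove a widget
    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "/widgets/{id}")]
    void DeleteWidget(string id);
}

[DataContract]
public class Widget
{
    [DataMember]
    public string Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}
```

An implementation of this contract would be hosted with WebServiceHost (or a regular ServiceHost with the webHttp endpoint behavior) so the UriTemplate routing takes effect.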
 
Recently I have been reading through the book Learning WCF (by that Indigo girl), and it spends some time explaining the role of endpoint base addresses and full addresses. A base address is typically supplied through a configuration entry (or the service host), and each endpoint then adds a relative address to it to form the full address at which the service exists. Here is a link about the use of base addresses: http://www.dasblonde.net/CommentView,guid,756f2aee-7146-4bc3-8406-d0e3530dc507.aspx (the CSS styling is not working right on her site, so select the text of the article to read it). I did not hear much in the REST session about the role of base addresses and full addresses, but it seems like this would be a more natural approach than building an extra-long URL like a Flickr URL (http://www.flickr.com/photos/mzalikowski/530036109/), where the date is a meaningful part of the URI and is one of a couple of different resource addressing schemes possible with Flickr.
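
Here is a small self-hosting sketch of that base address / relative address split. The service, port, and relative path are made up for illustration; the pattern is that the ServiceHost owns the base address and each endpoint appends a relative address to it.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IWidgetService
{
    [OperationContract]
    string GetWidgetName(int id);
}

public class WidgetService : IWidgetService
{
    public string GetWidgetName(int id)
    {
        return "Widget " + id;
    }
}

class BaseAddressSketch
{
    static void Main()
    {
        // The host owns the base address; each endpoint supplies a relative
        // address that is appended to it, so the full endpoint address here
        // becomes http://localhost:8000/WidgetService/basic.
        var host = new ServiceHost(
            typeof(WidgetService),
            new Uri("http://localhost:8000/WidgetService"));

        host.AddServiceEndpoint(
            typeof(IWidgetService),   // contract
            new BasicHttpBinding(),   // binding
            "basic");                 // relative address

        host.Open();
        Console.WriteLine("Listening... press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```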
 
I think the REST model will serve to simplify the addressing schemes used with web services, but I am a little worried about interoperability. For example, if a service exposes a SOAP endpoint that provides WSDL, it would also make sense to expose a REST endpoint, since it is a different invocation style. WSDL has very widespread adoption, whereas REST is not implemented by everyone, and it could take a while before all vendors provide a compatible implementation. I am worried about web service integrations that do not recognize PUT or DELETE operations and the possibility that these may be handled improperly by a SOAP engine or HTTP intermediary. More to follow on whether I find any incompatibilities or gotchas with REST…
 
Tune in tomorrow for a post on the last day of TechEd. Thanks to anyone who has been reading my posts.

Multiculturalism and IT Certifications

Today at TechEd I attended a session on the techniques and policies Microsoft employs for managing the security of the Microsoft certification process. If you know me, you know I have quite a few certifications and am a staunch believer that a considerable burden of certification security is in the hands of the person seeking certification. I often get comments from people who for some reason believe that the problem of braindumps is exclusively a Microsoft corporation problem. Today I learned a little more about how a person's worldview and culture can affect the way they perceive cheating on things like IT certification exams. In college I studied missiology, which involves a considerable amount of study of cultures and the effect that worldview has on beliefs and truths. I do not want to make this post too philosophical by looking at the values or ethics of cheating, but it was very interesting to hear someone from Microsoft talk about how situational ethics in some countries results in "ethical cheating" and how Microsoft approaches this on a business basis. (Technically, this is not a PC post, but it's close.)
 
The most interesting aspect of the session was the way Microsoft is approaching the problem. It was discussed as a cultural issue, which made sense to me. A lot has been written on intellectual property (IP) and the challenge of enforcing IP in foreign legal contexts, but it was quite informative to explore how company policies had to be updated to deal with multicultural issues. In certain foreign contexts where software piracy is a big problem, one of the key approaches involves educating end users and consumers on IP and on the impact to the software ecosystem when IP is not respected. Similarly, education of the test taker becomes of chief importance. For these reasons, I maintain that in many ways the future security of certifications relies heavily on the test taker.
 
Last year when registering for a certification exam, the Prometric site had a bold note that exams could not be registered at all in Pakistan, and I had initially assumed this must have been a service outage due to an internet cable being severed or some other technical problem. Today I learned it was because Microsoft shut down all certification testing in that country for a time because of the certification security issues occurring there. The approach of closing test centers based on multicultural issues is huge. This example is related to the changes Microsoft has made to its certification program, including bans as well as test center closings in the US. It's interesting to see things that occur in developing countries affecting Microsoft policies. I suppose this is another example of the nature of our global economy.
 
My apologies if we were too philosophical today, but I thought it was an important topic to discuss.

Distributed Technologies for Embedded Devices

Today I had planned on taking a few more sessions on WCF and SOA but got pulled into the first session on Windows Embedded because an implementation for one of our Magenic clients was being discussed. In the course of the session, SOA actually came up as central to the current strategy and vision for Windows Embedded technologies. Recently, some Windows Embedded devices have become 32-bit and IP-addressable, which means they can handle more sophisticated operations and communicate over addressable network technologies. This was quite interesting and eye-opening to me, because I typically classify Windows Embedded technologies within the Windows Mobile or Compact Framework technology stack. The role of device-based technology became my focus for the day.
 
Later in the day I was exploring the Technology Learning areas at the conference, where different technologies are represented by Microsoft employees and partners who can answer questions. The format at TechEd was useful because each technology had a mini-booth with a flat screen on which the expert could demonstrate the technology rather than just describe it on a whiteboard. I walked around and talked to the people representing BizTalk. I focused most of my time on the BizTalk RFID and BizTalk Server booths but talked with the Host Integration Server people as well. At the BizTalk RFID booth I asked about the role of standardization within the RFID platform and learned that RFID devices are largely standardized on frequency and communication technology, and BizTalk RFID takes advantage of this standardization to provide integration capabilities. I had heard RFID discussed at the Business Process/SOA conference back in October 2007, but it did not really take hold in my mind how this could be useful. A representative from partner company Cathexis (http://www.cathexis.com/about-cathexis/partners/biztalk-rfid.aspx) demonstrated a handheld device and an RFID scanner, which illustrated to me how easily WCF or other distributed technologies could be connected to an RFID application. So after these sessions I was very interested in how mobile or embedded devices could act as clients within a distributed data model enabled through BizTalk, WCF, and SOA.
 
At the end of the day I attended a session on parallel computing which described a new Microsoft product called Windows HPC (http://www.microsoft.com/hpc). In BizTalk work I am frequently asked how to properly design a cluster or handle vertical and horizontal scalability. Where BizTalk is concerned, you have internal cluster design through hosts as well as the existing Windows clustering stack and NLB to take advantage of. With multi-core processors and hypervisor virtualization, even more options for application partitioning are now available. It was interesting to hear a classical software engineering perspective on parallel computing and to see that Microsoft is making strides to improve its cluster product set beyond the existing Windows cluster offering. I anticipate that parallelism may become a BizTalk setting much like other tuning or throttling settings in future versions of BizTalk.
 
Tune in tomorrow for more details from TechEd sessions!

BizTalk and the Cloud

Today I was at the pre-conference part of TechEd in Orlando. I attended the WCF/SOA overview by Juval Lowy and picked up a few interesting details. For my personal learning I have been working through the Learning WCF book by that Indigo girl (http://www.oreilly.com/catalog/9780596101626/) on my train ride into Chicago. Lowy has another book in the same O'Reilly series, and I found the content of his talk today to be roughly parallel to the content of the Learning WCF book. The Learning WCF book targets a more introductory audience than Lowy's Programming WCF Services (http://www.oreilly.com/catalog/9780596526993/index.html), but I wanted a better foundation in WCF. Ok, enough of the rambling.
 
So, a few WCF topics that I thought were most interesting. Lowy talked about the DurableService attribute, which can be added to a WCF service implementation to provide some of the persistence typically associated with WF in .NET 3.5. This is closely related to BizTalk's concept of orchestration dehydration. For more information on DurableService see http://weblogs.asp.net/gsusx/archive/2007/06/14/orcas-durable-services.aspx (unfortunately an Orcas-era post, so no guarantees). DurableService enables a WCF service to function as a long-running process, similar to BizTalk long-running transactions. It is very interesting to see a couple of different options for service persistence and the flexibility to use this via WCF, WF, or both. In BizTalk design you might typically use a tier of servers for processing messages received through adapters; being able to put service persistence in either WCF or WF means it will be possible to split a service across more than one physical server and run WCF on a separate physical tier while maintaining process persistence.
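
Here is a minimal sketch of what a durable WCF service might look like. The shopping-cart contract is hypothetical; it assumes the .NET 3.5 DurableService/DurableOperation attributes from System.WorkflowServices, a context-enabled binding such as wsHttpContextBinding, and a SQL persistence provider configured in the service's behavior.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IShoppingCart
{
    [OperationContract]
    void AddItem(string sku);

    [OperationContract]
    int Checkout();
}

// The service class must be serializable because its state is written to the
// persistence store (dehydrated) between calls, much like an orchestration.
[Serializable]
[DurableService]
public class ShoppingCartService : IShoppingCart
{
    private int itemCount;

    // CanCreateInstance = true lets this call start a brand-new durable instance.
    [DurableOperation(CanCreateInstance = true)]
    public void AddItem(string sku)
    {
        itemCount++;
    }

    // CompletesInstance = true removes the persisted instance when the call ends.
    [DurableOperation(CompletesInstance = true)]
    public int Checkout()
    {
        return itemCount;
    }
}
```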
 
Here at TechEd there are many hands-on labs (HOLs) running concurrently with the conference sessions, so if you want to dive into something you can jump right in. I was looking at an HOL on developing Workflow Services with VS 2008. This showcased more .NET 3.5 technology that goes a long way toward replacing the business process functionality of BizTalk. I was amazed that it is possible to expose a WF process as a WCF service, and it was very interesting to hear that when a WF sequential process calls a WF state machine, you can use correlation to coordinate messages between the processes. If you are experienced with BizTalk you can slowly see Microsoft introducing technologies that will eventually replace BizTalk functionality, and it's interesting to determine which ones match which BizTalk functions. One area I had been wondering about until today was which WF or WCF technology would handle message correlation for the various message exchange patterns in which correlation is required. This HOL showed how to handle service correlation, which should match the BizTalk functionality as long as the integration partner exposes a WCF endpoint.
 
Overall, it has been very interesting today. I will be continuing to post throughout my time here so check back later! Now it's time for me to get some food. Bye!
