Wednesday, December 23, 2009

A new year approaching

With another week to go and the Christmas holidays coming, it's time to reflect on the past and coming year. I think 2009 was a good year for AX 2009, despite the product being released in 2008. The main reason is the Gartner report positioning AX 2009 as the number one (#1) leader in the Magic Quadrant for what we call SMB here in Norway. This could of course be marketing hoopla, but if you read the report, Gartner pays special attention to "Microsoft delivering on their vision". Personally I find this interesting, and I suggest that every partner study the current Roadmap and Statement of Direction (SOD) to see what Microsoft is planning (visionary) for AX in the future. In my mind, AX 2009 didn't bring a lot of news on the technology side (the new batch framework and the support for UTC are perhaps the most important improvements), but overall MS managed to position AX as a real challenger and maybe also a winner in the battle against SAP and Oracle/PeopleSoft.

Based on the number of hotfixes released for AX 2009 and SP1, MS still has some challenges regarding quality. Maybe a slowdown in the release cycle would be good advice, and maybe the early adoption program should be broadened to gain more experience from the field. Again, this is not unique to AX: we have to remember that all software is manufactured by humans, and no human is free of errors. All in all I think AX 2009 was a great step forward, at least with regards to technology and architecture, since the product is well positioned to compete with the biggest rivals even in the upper right quadrant. Finally, MS got rid of most (all?) of the not-so-industry-standard implementations, as expected, and lifted the product up to the MS level of integration (still some ground to cover, but greatly improved).

So these are exciting times, generally and with regards to AX. I'm positive about the year to come and also the next decade. From the Roadmap and SOD, I read a steady growth both vertically and horizontally with regards to functionality. We are already seeing some evidence of this with MS buying verticals from partners (like the POS vertical from a Danish worldwide partner, a rather special twist in buying a highly specialized ERP solution called Guideix A/S). The future of this evolution at one of my previous employers will be very interesting to follow. Driven by market, customers or product/technology? Who knows...

Anyway, I'm optimistic about the future of AX, and I seriously mean that AX 2009 brought AX closer (really close) to what the product has been marketed as since the beginning (Damgaard back in 1998 - more than 10 years ago). At the same time I have to admire the Damgaard brothers for the introduction of a true 3-tiered solution (was it 2.5?) that MS now has adopted and brought forward to a pure 3-tiered solution with no other choices. Add the rather annoying AOCP being replaced with MS RPC in AX 4.0 and you get the picture.

One final message to all X++ developers out there: take the time to study and understand the difference between the caching schemes available in AX (look at the DEV III course documentation or the Inside AX 4.0/2009 book). This is key knowledge for utilizing the 3-tier architecture in AX 4.0 and 2009 (also valid for earlier versions when running in 3-tier thin-client configurations) and is, at the same time, rather complicated. From my experience, this is an area that gets far too little attention and that has a big potential for optimizations (and performance gains).
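
To make this a little more concrete, here is a minimal X++ sketch of the explicit set-based caching pattern (RecordViewCache) as documented in the Inside Dynamics AX book; the account number is illustrative only, and the per-table caching schemes (the CacheLookup property: Found, FoundAndEmpty, NotInTTS, EntireTable) are configured on the table in the AOT, not in code:

    static void recordViewCacheExample(Args _args)
    {
        CustTrans       custTrans;
        RecordViewCache recordViewCache;

        // Define the result set without fetching it (nofetch).
        select nofetch custTrans
            where custTrans.AccountNum == '1101';

        // Cache the result set on the tier where the code runs.
        recordViewCache = new RecordViewCache(custTrans);

        // This select is now served from the cache, not the database.
        select firstonly custTrans
            where custTrans.AccountNum == '1101';
    }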

Without digging deeper into the philosophical area, I'll take the opportunity to wish my few (but valuable) readers a happy Christmas and an interesting new AX year (and decade)!

See you in 2010! So long...

Wednesday, December 16, 2009

DAS or SAN for SQL Server part II

I thought a follow-up was adequate for this one. After looking for some more views, opinions and recommendations, I found the following three-part series on the blog of the Microsoft SQL Server Development Customer Advisory Team:

Deploying SQL Server with SAN #1
Deploying SQL Server with SAN #2
Deploying SQL Server with SAN #3

Tuesday, December 15, 2009

AX Application DBA

Yesterday I received a newsletter from SSWUG titled
Are You an application DBA?
My first reflection was that I have been, and partly still am, an application DBA for AX (and also some Axapta solutions). My second thought was that this is an interesting question, since I very often see a total lack of DBA responsibilities and naturally even fewer DBAs who focus on AX. When I meet someone with a DBA role, it's often in relation to hosted solutions (outsourcing), and these guys are general DBAs focusing on "the big picture" (at best).

One example was a customer with a consolidated SQL Server solution counting almost 60 user databases supporting the same number of applications. When facing these kinds of situations, the potential for optimizing the AX database is rather small. Take tempdb as an example. AX 4.0 and later benefit from Read Committed Snapshot Isolation as a consequence of introducing versioning and optimistic locking. This in turn increases the load on tempdb, since the version store is held in tempdb. And since tempdb is a system database shared between every database on the same instance, the total load on tempdb again impacts AX. I can say that different applications use tempdb differently, often not following best practice. Mixing OLTP and OLAP load on the same instance or database server is another classic.

These are examples of when you need to know how the application uses SQL Server, and maybe not something a general DBA would pay attention to without looking into how each application actually utilizes tempdb. Another example is locking and lock escalations in other applications impacting, for instance, AX. Add databases consuming a considerable amount of CPU in combination with high I/O load.

So in my opinion there is a great need for application DBAs, and AX is a good example since AX is an ERP solution (very often mission critical). I would bet that all customers running AX would see a good return on investment (ROI) from hiring an application DBA for AX, responsible for proactive maintenance and follow-up on queries not performing at their full potential, long-running queries etc. But the economic downturn probably impacts the willingness of customers to spend money on this, and this again justifies calling in people like myself for short-term activities (firefighting). This is a bit of a paradox to me...

Friday, December 4, 2009

AX and virtualization (to do or not)

I have only done one completely virtualized implementation of AX 2009 (including SQL Server 2008 EE x64). This solution is not yet in production, but in the meantime I have looked at several other virtualized implementations done by other partners and often operated by another vendor (hosted).

One of them (AX 4.0) consists of 2 AOS servers in a normal AX load-balancing cluster. The interesting part of this setup is that one server is a dedicated, physical server with DAS, while the other is virtualized (ESX 3.5). When talking to the users, they complain about overall performance when having sessions against the AOS instance on the virtualized server. I suspect that the most clever users always look at the caption in the main AX window to see which AOS server they hit (and maybe they also immediately start one additional session hoping to hit the physical one, followed by closing the first session against the virtual). Everything seems OK from the server console (PerfMon, Task Manager, event log etc.), but I also feel (subjectively) that loading the AX client takes somewhat longer when hitting the virtualized one.

Again I have to talk about complexity. Ideally, the AX AOS should be the perfect candidate to virtualize since it's a CPU, memory and network intensive process. The AOS server will never carry a high physical I/O load, and disk I/O is in general low (of course some batches could impact this in AX 2009). And AX 2009 is now fully supported on several virtualization platforms. So exactly why do I bring in complexity again? As with Storage Area Networks, a virtualized environment is simple to utilize (when it's working as expected), but the technology and the levels of infrastructure behind it add a lot of potential error sources when things are not running as expected (most virtualized environments also utilize a SAN). I guess the odds of getting down to all the needed details are a lot better when implemented at a customer site, but customers nowadays tend to outsource this since "it's not part of their core business". In this scenario, the complexity is very visible since the AX partner (or the consultant doing the performance audit) doesn't even get all the details or access to the parts of the system necessary for forming the big picture. Seen from the hosting partner's side, it's all about utilizing the underlying platform as much as possible, trying to maximize its potential. This often means mixing customers on the same platform (logically isolated at every level), but ultimately sharing the exact same resources at a certain level. This again typically leads to traditional bottlenecks, but they are well hidden from both the customer and the man in the middle.

So what's the lesson here? Always differentiate between hosted solutions and locally implemented ones! The main principles are the same, but it's crucial to bring in the whole army of partners in the planning when the solution is operated by a hosting partner. Without this, you are basically left on your own, and you'll probably never be able to see the big picture, consisting of all the details, needed to judge where the real issues causing the problems are hiding. I guess this is true for every application or solution.

Wednesday, November 18, 2009

DAS or SAN for SQL Server?

Performance issues are always complicated, but a common approach is possible (in fact, the only way to work through them is to define the problem area and structure the different factors into groups). In my approach, the operating platform is one key group while the database is another. Based on the work I have done since September, I have found it very hard to get real and trustworthy information about the utilization of SAN storage when it is used for SQL Server. And as an advisor I don't have deep knowledge about the solutions from the individual SAN vendors (expertise and experience required). In my mind, the old rules about separating physical I/O by I/O characteristics still apply, despite the fact that SANs normally have high-end controllers capable of handling huge I/O loads. DAS (Direct Attached Storage), the traditional way of providing storage, seems to be much easier to work with when analysing performance issues at the database level, mainly because good old Performance Monitor will tell you all you need to know to conclude and define possible countermeasures. This is NOT the same for SANs! Since a SAN is mainly used for storage provisioning and consolidation, you have to work your way through the whole SAN setup and identify every piece of software that is generating I/O against the SAN. Add some interconnects (normally fiber switches, HBAs etc.) in between and you have multiplied the complexity by at least pi.

Add another overhead for hosted solutions, where several customers normally share the same infrastructure, and the picture starts to become rather tough to control. During my search I found another blog discussing the same issues, and his summary resembles much of my own experience.

So this is yet another example of how developments in technology actually complicate the everyday work of both customers and consultants, and this is clearly something to account for when doing the TCO matrix.

Based on this, customers should evaluate SQL Server storage and consider implementing database storage as DAS. Do you agree?

Friday, November 6, 2009

Current challenges and issues

In my new role I have been busy working with a couple of hosted solutions experiencing various stability and performance issues. It's easy to conclude that hosting AX solutions externally is demanding and that it requires a lot of attention from the customer to establish a good working relationship between the hosting partner and the AX partner.

Even more important is the ability to regulate the responsibilities between the different parties. Simple things like database maintenance must be defined in a way that separates regular maintenance not requiring AX knowledge from the opposite. Typical maintenance tasks like reorganizing and reindexing fall into the first category, since they can be performed by any DBA (no changes to the definitions or design). When it comes to maintenance requiring AX knowledge (requiring changes in the Tables node in the AOT), it's just as important to make sure the AX partner has the necessary access to SQL Server to be able to utilize all the valuable information provided in the form of dynamic management views (I'm not mentioning Oracle here, since it seems like SQL Server is the dominating RDBMS for AX, at least here in Norway).

SQL Server 2005 brought a lot of good news in this area, and SQL Server 2008 took this even further with the introduction of a brand new Activity Monitor and the Performance Data Collection. Developers should, in my opinion, pay more attention to the load that their customizations put on the database server, and examining query plans should be obligatory before releasing changes into the production solution.

In addition to the simple examples mentioned above, my experience says that even if each solution has different characteristics, some common areas can be defined to guide the approach. Without going into the details, the following summarizes what I normally define as the key sources for each category:

Stability
• Network-related issues (AOS -> database, AOS -> application share)
• Operational knowledge (description of the relationships between the different server roles in the AX solution, and routines describing how to perform a controlled stop-start sequence)
• Proactive maintenance of the AX application and kernels (implementing hotfixes and rollups for the current SP level, new SPs)

Performance
• Physical disk I/O at the database level (separation and isolation, RAID levels, sector alignment, block size) regardless of DAS, SAN, NAS heads etc. (general rules apply)
• General load on database instance and utilization of system resources (both OS and SQL Server internals)
• Database configuration, usage and maintenance (best practice configuration, indexes, transactions, space allocation, index maintenance)
• Customizations (caching, run on, query width and selectivity - see the sketch after this list)
• Identify and implement hotfixes related to performance issues (both application and kernels)
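
On the query width point, a small X++ illustration of what I mean: a narrow field list reduces both the load on SQL Server and the traffic between the tiers (the customer group value is just an example):

    static void narrowSelectExample(Args _args)
    {
        CustTable custTable;

        // Wide select: fetches every field of every matching row.
        while select custTable
            where custTable.CustGroup == '10'
        {
        }

        // Narrow select: only the listed fields are fetched and transferred.
        while select AccountNum, CustGroup from custTable
            where custTable.CustGroup == '10'
        {
            info(custTable.AccountNum);
        }
    }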

While talking about database configuration, I would very much recommend the checklists that the AX Performance Team has compiled and published on their blog. This is vital information for everyone involved in installing and configuring AX, but also a general source for hosting partners and customers. The best thing is that it is based on experience and best practice, and in my opinion the value of their recommendations goes beyond AX (it is general for OLTP applications).

In addition to working with troubled solutions (time consuming, but very motivating), I have implemented “my first” AX solution in a purely virtualized environment including SQL Server 2008 Enterprise. I'm excited about this solution, mainly because it's always interesting to gain new experience. Many people have probably walked this path already, but there's a first time for everything.
This finalizes my first blog entry in several months, but I will be paying more attention to my blog from now on.

Happy weekend!

Thursday, June 25, 2009

Opportunities

This time I'll post a short professional status - I'm currently unemployed. My company decided to end its Dynamics business after 2 years. Not the nicest entry to the summer holiday, but at least an opportunity to get close to the Norwegian Dynamics business, talking to customers, partners and Microsoft in a new setting. This business seems to be going very well at the moment despite the economic downturn. One of my reflections is why my company decided to end this business when the market is very positive around both AX and CRM... Without going into details, ERP in general seems to be a challenge for my company, and paired with the history of the company, it's quite obvious that ERP was never a part of the culture. This is always difficult to uncover during interviews, and you need some time within a company to really see it.

I hope to keep this blog going and to close my next professional move as soon as possible, followed by a long summer holiday together with my family.

Happy summer to the WWW!

Update 2009/06/26:

I'm closing the hunt and signing a new work agreement early next week. And this blog will still be alive after a long summer holiday. Stay tuned for updates in August/September.

Tuesday, June 9, 2009

Gartner Magic Quadrant

In a recently published research report (June 4), Gartner places Microsoft Dynamics AX as the leader in the Magic Quadrant for Midmarket and Tier 2-Oriented ERP for Product-Centric Companies.
"Gartner concludes that only one offering qualifies as a leader in the
market at this time: Microsoft Dynamics AX."

This is good news for everyone involved in both using and delivering services around AX, especially if you are working with AX 2009. I have been blogging about some of the features and changes in AX 2009 for some time, and I find my own opinion to be aligned with what Gartner expresses around the technological aspects.

After reading the report, I thought about my experiences with Axapta/AX, and I found it worth summarizing my history with the product to put its evolution into a subjective perspective (from Damgaard via Navision to Microsoft as software vendor):

I first looked at Axapta 2.1 in 2001, and back then it was considered to be a product with quite rough edges (a very young product, born around 1998). The company I worked for at the time decided to wait until version 2.5 before doing the first implementations. We got a lot of experience from these implementations and discovered that the product still had some rough edges (for instance the recurring issues around the famous axdat.udb file, especially in solutions with clustered Application Object Servers). Then we got Axapta 3.0, adding more functionality. Axapta was already branded as a true international solution, but the requirements for a central implementation supporting users in different time zones drove the number of AOS licenses required up heavily, and such solutions didn't support access to the data stored in the AX database across time zones (it was at best Unicode enabled, if you remembered to enable this before synchronizing the database for the first time). This was also in the same period when Microsoft bought Navision.

After spending more time on finding workarounds to technical issues than on bringing value to the customers, I decided to do something else for the next 3 years (I was a little bit fed up, to be honest). When the opportunity arose in 2007 to return to what was by then called AX (4), I first considered the architectural changes (the axdat.udb file eliminated and the license/session handling put into the database, the AOCP protocol replaced with RPC, a new Web application built on SharePoint, a pure 3-tier architecture and a greater range for the very important RecId value) and found them very promising. Based on this, I decided to give AX a second try. My experience proved that AX 4.x was a big step in the right direction with regards to architecture. Now we only lacked support for handling users across different time zones on one (or several) AOS instances, with scaling and redundancy being the drivers for the number of AOS licenses required. This was finally introduced in AX 2009, and for the first time I considered AX to live up to the promise of being a true, international solution. Add even tighter integration with other Microsoft products and technologies, and a Web application "bringing BI to everyone" through a role-based Enterprise Portal, and AX 2009 was positioned to compete with the other two main rivals also in the enterprise market.

I don't regret returning to AX, and I'm confident that AX 2009 and later releases will move further up and to the right in the Magic Quadrant, "cementing" its position as the most agile and TCO-effective ERP solution on the market. SAP Business One is of course a serious player, and it will be interesting to see how the competition evolves over the next years.

So what's my point here? Given the history of AX and the fact that Microsoft has now done the necessary and required changes with regards to the architecture (not a small task), the product enters a new era. Companies looking for a new ERP solution should indeed evaluate AX 2009 in line with both SAP and Oracle. And existing AX customers running a version prior to AX 4 should work through their existing solution, either aiming at eliminating as many customizations as possible or in fact re-implementing AX 2009 with a clear strategy around utilizing standard functionality, to lower TCO over time and keep pace with new/added functionality. It's, in my opinion, all about positioning for a very exciting product cycle where I expect a lot of new functionality to be introduced (both horizontal and vertical) and fewer architectural changes!

Some information about the next release of AX (6) is already available (AOD files moved from the file system to the database, increased range for ObjectId etc.). Some will probably argue that these are architectural changes, but as I see it they are not groundbreaking compared to the changes already implemented in AX 4 and AX 2009. If you have access to the Product Roadmap, you can read what Microsoft is planning for the future, and my final word is the fact that Gartner concludes that Microsoft is delivering on their vision, and that this is one of the key reasons for Gartner's conclusion!

Happy reading.

Mounting the AX 2009 Demo VPC under Hyper-V

I was recently asked to make sure that we could demonstrate AX 2009 outside MS Virtual PC 2007. I admit that Virtual PC is not the ideal virtualization environment for demos, especially if you don't have a secondary hard drive with sufficient performance (a fast disk) and sufficient memory resources. Since most people carry a standard business-configured laptop computer, it's hard to run a virtual machine with satisfactory performance.

The other issue is whether to utilize the standard demo VPCs (Microsoft has two demo VPCs for AX 2009) or to create your own. Since setting up and configuring a complete AX 2009 solution has become a time-intensive task due to the number of components and supporting software required, I decided to use the standard VPC provided by Microsoft (17 files to download for demo VPC 1, aka AX-SRV-01).

An additional requirement was that users should be able to connect to the virtual machine remotely.


Here are the tasks I performed to provide the standard demo VPC under Hyper-V:


  1. Downloaded 17 files for demo VPC 1 on my laptop

  2. Extracted the VHD to my laptop

  3. Mounted the VHD under Virtual PC 2007 and started the VM with public networking

  4. Updated the VM with the latest security updates and verified that the firewall was running

  5. (Now I could have uninstalled the Virtual Machine Additions, but I decided to do this after mounting the VHD under Hyper-V)

  6. After stopping the VM, I copied the VHD to the Hyper-V host (approximately 30 GB)

  7. Mounted the VHD under Hyper-V and started it through the Hyper-V console with public networking (no networking will be enabled until Integration Services is installed)

  8. Uninstalled Virtual Machine Additions and restarted the VM (some creativity was needed at this stage to be able to access Windows Explorer)

  9. Installed Hyper-V Integration Services and restarted the VM

  10. At this point the VM was fully operational, including Remote Desktop connections, but none of the configured web sites were working...

  11. After investigating the web site configuration, I noticed that they were configured to use host headers, which led me to further investigate the DNS configuration. Without being a DNS expert, I concluded that the relationship between the defined DNS records and the original IP configuration (192.168.0.1) on the one side, and the forward lookup zone on the other, was important. I did NOT want to tamper with or alter the DNS configuration, to avoid a lot of reconfiguration (remember that the server runs several roles, including Domain Controller).

  12. After stopping the VM, I created a new private virtual network (Hyper-V networking) and added a second network card that I bound to the newly defined network

  13. After starting the VM once again, I defined a static IP (192.168.0.1) on the new network interface and voila - all web sites were operational and accessible again without any reconfiguration.

I allocated 3 GB of memory and one virtual CPU to the VM under Hyper-V.

The overall performance of the VM was surprisingly good, even when executed on a Hyper-V host with local disks (no SAN) sharing resources with a large number of other virtual machines.

The major limitation with this setup is the number of concurrent RDP sessions (WTS running in admin mode).

Friday, May 8, 2009

Watch out for the combination of reports as PDF with graphics and batch execution

We have seen this issue in our design, and Microsoft has now finally verified (newsgroup microsoft.public.axapta.programming) that AX 2009 has an issue when you use the built-in PDF functionality for reports containing graphics (logos etc.) and the logic executes as part of a batch job. The reason is that the AX PDF logic uses the Image class to handle the graphics, and the Image class is in fact (as documented) bound to the client tier. So this is an impossible combination and, in my opinion, a good example of issues in the shadowlands between the changes in the Batch Framework and the default application logic provided by Microsoft.

The solution is to use a PDF printer driver to produce the PDF file, but this shouldn't really be necessary since AX has built-in support for PDF generation. If you have read some of my earlier blog posts, you will probably have noticed that we have experienced some major issues when running application logic in batch under the new Batch Framework, and the lesson learned is that you should pay extra attention and put in extra effort to make the batch jobs run as expected.

My best tip is to plan for this as an extra test activity during build (or as an additional payload during testing) and reflect this in the estimates given to the customers. I conclude that the complexity is rapidly increasing as a consequence of a very good architectural change in AX (server-bound batch execution and impersonation), but the application lacks some conditional checks and maybe also some best practice checks in the compiler ("Dear developer, you are trying to call a class on the server that is bound to the client. Please reconsider your design and look for alternative solutions").
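
Until we get such a compiler check, a defensive runtime check is the best we can do. A rough sketch of the idea (hedged: verify the xSession API against your kernel version before relying on it):

    // Before entering the Image-based PDF path in the report logic:
    xSession session = new xSession();

    if (session.clientKind() != ClientType::Client)
    {
        // Running server side (e.g. in batch) - the client-bound Image
        // class is not available here, so bail out or pick another target.
        throw error("PDF generation with graphics requires a client session.");
    }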

Thursday, April 30, 2009

Staying on top of things - AX 2009 hotfixes

To be honest, I find it very hard to keep track of the available hotfixes (and cumulative rollup packages). By very hard, I'm mostly thinking about the time spent on 1) searching for possible hotfixes and 2) if a possibly related hotfix is found, understanding what the hotfix actually solves.

Partnersource (or Customersource) is the primary source for this kind of information, by searching the Knowledge Base. Maybe it's only me, but I would like a much more efficient tool and much more detailed information about the individual hotfixes, to keep our customers happy and of course spend as little time as possible on this activity. The real trouble arises when you find a KB number without a published KB article. What to do in these situations? The answer is to ask for the unpublished information through your support channel, thereby allocating even more time and resources.

So people, take the opportunity to vote by answering the recently published poll on the right-hand side of this blog.

I promise to share the results with Microsoft.

The bottom line is that it's all about software quality, and about easily providing current information about available hotfixes to the partners (and customers).

Thursday, April 2, 2009

Debugging Batch Jobs

We have a hard time getting the AX 2009 debugger to work when breakpoints are set in code executing in batch jobs. Microsoft has described this in several documents without mentioning any special requirements, but we are unable to get the debugger to work as expected. All settings are correct in the server configuration and in the client where the breakpoint is defined. We have also tried using the breakpoint keyword in the code, without any difference. It seems like the debugger has a hard time attaching to the process. We have tried this in several environments and solutions, and in different combinations, and basically we are stuck.
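
For reference, this is the trivial pattern we are testing with; the method is a placeholder, and note that "Enable breakpoints to debug X++ code running on this server" must be checked in the AX Server Configuration utility for server-side code:

    public void run()
    {
        // Hard-coded breakpoint in code executing server side in batch.
        // The debugger should halt here when server debugging is enabled.
        breakpoint;

        this.processMessages(); // placeholder for the actual batch logic
    }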

The fundamental changes in the Batch Framework require more debugging than ever, due to the increased complexity inherited from the changes around impersonation (RunAs), and the primary tool in this situation is in fact the debugger.

We have also checked with several other sources, and they have had the same experience.

All experience around this issue is highly welcome.

Update 2009/04/03:

Microsoft has confirmed that they have received a problem report (4652) for this issue, but no solution or fix is available yet. The problem was reported in January 2009.

Monday, March 30, 2009

A few observations 3 weeks after GO LIVE

OK, now I've been part of a Go Live for AX 2009 SP1 and have gained 3 weeks of operational experience. It's too early to draw any definitive conclusions, but this is a short summary of my key observations, in random order:

  • The decision to go for Windows Server 2008 x64 Standard was right
  • The decision to go for SQL Server 2005 SP2 x64 Standard in an active-passive failover cluster was probably right, except for some breaks in the database communication (exact reason still unknown)
  • AIF performs better than expected, even without utilizing parallel processing on several AOS instances (2,000 messages can easily be processed in a couple of minutes utilizing one unidirectional channel)
  • The BizTalk Adapter hasn't caused us any specific issues so far
  • Pay extra attention when submitting batch jobs to the batch queue (or when constructing new batch jobs - see the sketch after this list) and don't expect all logic to execute automatically under the new Batch Framework (look out for logic tied to the client tier even in the standard application, and plan for adding some extra logic to keep the code compatible in both interactive and batch mode)
  • Look out for hotfixes from Microsoft (check Partnersource or Customersource frequently) and plan for some delays getting a response (new installations should evaluate the rollup package released late in February)
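
On constructing batch jobs: the AX 2009 pattern we use is the BatchHeader API, roughly as below (MyBatchTask is a placeholder for any class extending RunBaseBatch):

    static void scheduleBatchJobExample(Args _args)
    {
        BatchHeader batchHeader;
        MyBatchTask task = new MyBatchTask(); // placeholder RunBaseBatch subclass

        batchHeader = BatchHeader::construct();
        batchHeader.addTask(task); // the task executes server side on a batch AOS
        batchHeader.save();        // submits the job to the batch queue
    }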

Bottom line; the Batch Framework is worth paying special attention to during analysis, design, build and testing.

Thursday, March 12, 2009

LIVE, update

Well, we had a breakthrough this morning! At least the batch execution now seems to run as expected. We will conclude after running the batches through the night and tomorrow. Anyway, all news is good news right now.

After investigating the AOS server used to run the AIF batch(es), we discovered some deviations in Performance Monitor (a high number of page faults and hence PF Delta). This led us to change the Batch Group and switch to another AOS server. After this we have been able to automatically process inbound and outbound messages. If the batch survives the night and tomorrow, we will take action on the faulting AOS by uninstalling/cleaning up and installing a new AOS instance. The only load on the suspicious server right now is generated by the BizTalk Adapter (.NET Business Connector).

After spending long days in the office since last Friday, I'm heading home more optimistic than ever.

Wednesday, March 11, 2009

LIVE

OK, we have been LIVE since last Friday, and I thought it was time for a short update.

Basically things have gone quite well, except for a lot of problems with AIF and message processing. We have identified some issues with customizations, without coming close to solving the biggest issue: batch execution of the AIF message processing services. The weird thing is that they work for some time and then suddenly stop working, leaving no trace of what went wrong (the status is just set to 'Error' without any error message in the Queue Manager or in the Exception log). We have done a lot of debugging, but this is very time consuming and the underlying logic is hard to follow. We have set up the Batch Groups as per the MS definition, routing the batch execution to the correct AOS server (two non-clustered), and we have tried a lot of different setups for the batch jobs, with and without dependencies, without any success. The event logs are looking good and the overall utilization of the server resources is under control.

Part of the complexity is tied to the impersonation logic (RunAs), but this seems to be under control after some issues the first days (permissions). We have ended up implementing a manual processing routine that bypasses the impersonation logic to allow debugging. The manual routine works well, but needs a dedicated resource to act as queue manager (not a good solution, but we are able to keep the queue in a controlled state).
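
For those who haven't looked at it, the impersonation in the new Batch Framework rests on the runAs API, which executes a static method under another user's credentials. The gist, as a hedged sketch (user, class and method names are illustrative; check the SDK for the exact signature):

    static void runAsExample(Args _args)
    {
        // Run MyBatchTask::runStatic as user 'jdoe' - the same mechanism
        // the batch framework uses to impersonate the job submitter.
        runAs('jdoe', classNum(MyBatchTask), 'runStatic');
    }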

I can't go into further details right now, but the new Batch Framework is causing us some pain, and this is not what we were hoping for. When the batch jobs run successfully, the performance is acceptable and we are able to process a decent number of messages in either direction. To add some more mystery to the issue, we sometimes have the inbound processing running without problems while we have problems with outbound processing. Suddenly this changes, without any clue or trace of what happened. And we are monitoring the AIF lock table closely and also looking for database locks, without seeing any issues in these areas.

So the hunt continues, and you can expect some more updates about this later, when the issue hopefully is tracked down and solved.

If anyone has some more on this, you are welcome to leave a comment (I'm still optimistic).

So long

Wednesday, February 4, 2009

The hunt for knowledge

This is a follow up of the post regarding RPC errors.

A first step to obtaining a basic understanding of how MSRPC works can be found here. Microsoft has implemented their own version of RPC that they call MSRPC. Wikipedia puts the MSRPC implementation into a historical perspective and relates it to the source specification, called Distributed Computing Environment (DCE), from the Open Software Foundation.

I haven't yet found any specific information about how the client and server stubs are implemented in the Dynamics AX 3-tier architecture, but I'll try to dig a little deeper into this.

In addition to this, Microsoft has recently published some KB articles on Partnersource that throw some more light on the issue I initially was blogging about. I'll try to find the URLs and update this post with the direct links when I get the time.

Update 2009-02-18:

Valuable links

Technet Magazine (article written by Zubair Alexander)
MSDN (Error Codes 1700 - 3999)
Florian's Weblog

Wednesday, January 28, 2009

Approaching GO LIVE

For those of you who have read my prior posts regarding AX 2009 and AIF with the AX BizTalk adapter: we have been working on our first implementation since June 2008. Without going into the details, we are approaching GO LIVE, and the latest configuration was done this week, such as defining the final endpoints. We basically have a solution with a set of front-line services (FTP) in the perimeter network and BizTalk Server 2006 R2 together with AX 2009 in the local network, tied together with some middleware. Not revolutionary or innovative, but a simple, cost-effective and robust solution based on proven technology.

The test results so far are good, and we are ready to GO LIVE, tying a lot of trading partners to the client. We will have sales orders, purchase orders, packing slips and picking lists flowing, in addition to invoices in different shapes and flavors. The first phase is a roll-out in one country, and two additional roll-out phases are planned for the coming months. By roll-out in this context, we mean markets with a set of different trading partners for each market.

Stay tuned for general updates!

Monday, January 26, 2009

Microsoft Dynamics AX 2009 AIF BizTalk Adapter Configuration White Paper

Finally, Microsoft has released a white paper explaining how to configure AIF together with the AX 2009 BizTalk adapter! You can find it here.

This documentation is clear and right to the point, compared to the corresponding white paper for AX 4, which is all we have had available until now. It also touches on the necessary batch jobs for supporting the outbound and inbound message flow (async/sync).
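
For the record, the four AIF service classes the white paper covers can be scheduled as ordinary batch tasks. A rough sketch, under the assumption that they can be instantiated and added like any other RunBaseBatch class (verify against the white paper):

    static void scheduleAifServicesExample(Args _args)
    {
        BatchHeader batchHeader = BatchHeader::construct();

        batchHeader.addTask(new AifGatewayReceiveService());     // read from transports
        batchHeader.addTask(new AifInboundProcessingService());  // process inbound queue
        batchHeader.addTask(new AifOutboundProcessingService()); // process outbound queue
        batchHeader.addTask(new AifGatewaySendService());        // write to transports
        batchHeader.save();
    }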

Well done, Microsoft; let's hope for more useful and formal documentation for AX 2009! This will avoid a lot of R&D activities and increase the quality of the product. We also need best practice documentation outside the scope of the compiler :-)