With another week to go and the Christmas holidays coming, it's time to reflect on the past and coming year. I think 2009 was a good year for AX 2009, despite the product being released in 2008. The main reason 2009 was a good year for AX 2009 is the Gartner report positioning AX 2009 as the number one (#1) leader in the magic quadrant for what we call SMB here in Norway. This could of course be marketing hoopla, but if you read the report, Gartner pays special attention to "Microsoft delivering on their vision". Personally I find this interesting, and I suggest that every partner study the current Roadmap and Statement of Direction (SOD) to see what Microsoft is planning (visionary) for AX in the future. In my mind, AX 2009 didn't bring a lot of news on the technology side (the new batch framework and the support for UTC are perhaps the most important improvements), but overall MS managed to position AX as a real challenger, and maybe also a winner, in the battle against SAP and Oracle/PeopleSoft.
Based on the number of hotfixes released for AX 2009 and SP1, MS still has some challenges regarding quality. Maybe a slowdown in the release cycle would be good advice, and maybe the early adoption program should also be broadened to gain more experience from the field. Again, this is not really special to AX; we have to remember that all software is made by humans, and no human is free of errors. All in all, I think AX 2009 was a great step forward, at least with regards to technology and architecture, since the product is well positioned to compete with the biggest rivals even in the upper right quadrant. Finally, "MS got rid of most of (every?) not-so-industry-standard implementation as expected" and lifted the product up to the MS level of integration (still some ground to cover, but greatly improved).
So these are exciting times, generally and with regards to AX. I'm positive about the year to come and also the next decade. From the Roadmap and SOD, I read a steady growth both vertically and horizontally with regards to functionality. We are already seeing some evidence of this with MS buying verticals from partners (like the POS vertical from a Danish worldwide partner, a rather special twist in buying a highly specialized ERP solution called Guideix A/S). The future story of this evolution of one of my previous employers will be very interesting to follow. Driven by market, customers or product/technology? Who knows...
Anyway, I'm optimistic about the future of AX, and I seriously mean that AX 2009 brought AX closer (really close) to what the product has been marketed as since the beginning (Damgaard back in 1998, more than 10 years ago). At the same time I have to admire the Damgaard brothers for the introduction of a true 3-tiered solution (or was it 2.5?) that MS has now adopted and brought forward to a pure 3-tiered solution with no other choices. Add the rather annoying AOCP being replaced with MS RPC in AX 4.0, and you get the picture.
One final message to all X++ developers out there: take the time to study and understand the differences between the caching schemes available in AX (look at the DEV III course documentation or the Inside Microsoft Dynamics AX 4.0/2009 books). This knowledge is key to utilizing the 3-tier architecture in AX 4.0 and 2009 (also valid for earlier versions when running in 3-tier thin configurations) and is, at the same time, rather complicated. From my experience, this is an area that gets far too little attention and that has a big potential for optimization (and performance gains).
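As a minimal sketch of one of those schemes, the job below uses the RecordViewCache class (available since AX 4.0) to cache a result set in AOS memory, so repeated selects over the same range don't round-trip to SQL Server. The account number is just a placeholder for illustration:

```x++
static void recordViewCacheDemo(Args _args)
{
    CustTrans       custTrans;
    RecordViewCache cache;

    // "nofetch" defers execution; newOnRecord() then runs the query
    // and caches the entire result set on the server tier.
    select nofetch custTrans
        where custTrans.AccountNum == '1101'; // hypothetical account
    cache = RecordViewCache::newOnRecord(custTrans);

    // This loop is now served from the cache, not from SQL Server,
    // as long as the where-clause matches the cached range.
    while select custTrans
        where custTrans.AccountNum == '1101'
    {
        // process the cached records here
    }
}
```

Combine this with sensible CacheLookup settings on the table (Found, FoundAndEmpty, EntireTable etc.) and you avoid a surprising number of client/server round trips.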
Without digging deeper into the philosophical area, I would like to take the opportunity to wish my few (but valuable) readers a happy Christmas and an interesting new AX year (and decade)!
See you in 2010! So long...
Wednesday, December 23, 2009
Wednesday, December 16, 2009
DAS or SAN for SQL Server part II
I thought a follow-up was adequate for this one. After looking for some more views, opinions and recommendations, I found the following three-part post series on the blog of the Microsoft SQL Server Development Customer Advisory Team:
Deploying SQL Server with SAN #1
Deploying SQL Server with SAN #2
Deploying SQL Server with SAN #3
Tuesday, December 15, 2009
AX Application DBA
Yesterday I received a newsletter from SSWUG titled "Are you an application DBA?". My first reflection was that I have been, and partly still am, an application DBA for AX (and also some Axapta solutions). My second thought was that this is an interesting question, since I very often see a total lack of DBA responsibilities, and naturally even fewer DBAs that focus on AX. When I meet someone with a DBA role, it's often in relation to hosted solutions (outsourcing), and these guys are general DBAs focusing on "the big picture" (at best).
One example was a customer with a consolidated SQL Server solution counting almost 60 user databases supporting the same number of applications. When facing these kinds of situations, the potential for optimizing the AX database is rather small. Take tempdb as an example. AX 4.0 and later benefit from Read Committed Snapshot Isolation as a consequence of introducing versioning and optimistic locking. This in turn increases the load on tempdb, since the version store is held in tempdb. Since tempdb is a system database shared between every database on the same instance, the total load on tempdb again impacts AX. And I can say that different applications use tempdb differently, and often without following best practice. Mixing OLTP and OLAP load on the same instance or database server is another classic.
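To make this concrete, here is a small T-SQL sketch: the first statement is how Read Committed Snapshot Isolation is switched on for a database (the database name AXDB is hypothetical; the AX setup normally handles this), and the second queries a standard DMV to see how much of tempdb the version store is actually consuming across all databases on the instance:

```sql
-- Enable RCSI on the AX database (hypothetical name; requires exclusive access)
ALTER DATABASE AXDB SET READ_COMMITTED_SNAPSHOT ON;

-- Gauge the version store footprint in tempdb (pages are 8 KB each)
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

On a consolidated instance, a large number here may be driven by a completely different application than AX, which is exactly the point: a general DBA watching only per-database counters can miss it.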
These are examples of when you need to know how the application uses SQL Server, and maybe not something a general DBA would pay attention to without looking into how each application actually utilizes tempdb. Another example is locking and lock escalation in other applications impacting, for instance, AX. Add databases consuming a considerable amount of CPU in combination with high I/O load.
So in my opinion there is a great need for application DBAs, and AX is a good example since AX is an ERP solution (very often mission critical). I would bet that all customers running AX would see a good return on investment (ROI) from hiring an application DBA for AX, responsible for proactive maintenance and for following up on queries not performing at their full potential, long-running queries etc. But the economic downturn probably impacts the willingness of customers to spend money on this, and this again justifies calling in people like myself for short-term activities (fire fighting). This is a bit of a paradox to me...
Friday, December 4, 2009
AX and virtualization (to do or not)
I have only done one completely virtualized implementation of AX 2009 (including SQL Server 2008 EE x64). This solution is not yet in production, but in the meantime I have looked at several other virtualized implementations done by other partners and often operated by another vendor (hosted).
One of them (AX 4.0) consists of 2 AOS servers in a normal AX load-balancing cluster. The interesting part of this setup is that one server is a dedicated, physical server with DAS, while the other is virtualized (ESX 3.5). When talking to the users, they complain about overall performance when they have sessions against the AOS instance on the virtualized server. I suspect that the most clever users always look at the caption in the main AX window to see which AOS server they hit (and maybe they also immediately start one additional session to hit the physical one, followed by closing the first session against the virtual one). Everything seems OK from the server console (PerfMon, Task Manager, event log etc.), but I also feel (subjectively) that loading the AX client takes some more time when hitting the virtualized one.
Again I have to talk about complexity. Ideally, the AX AOS should be the perfect candidate to virtualize, since it's a CPU-, memory- and network-intensive process. The AOS server will never generate a high physical I/O load, and disk I/O is in general low (of course some batches could impact this in AX 2009). And AX 2009 is now fully supported on several virtualization platforms. So exactly why do I bring in complexity again? As with Storage Area Networks, a virtualized environment is simple to utilize (when it's working as expected), but the technology and the layers of infrastructure behind it add a lot of potential sources of error when things are not running as expected (most virtualized environments also utilize a SAN). I guess the odds of getting down to all the needed details are a lot better when the platform is implemented at a customer site, but customers nowadays tend to outsource this since "it's not part of their core business". In this scenario, the complexity is very visible, since the AX partner (or the consultant doing the performance audit) doesn't even get all the details or access to the parts of the system necessary for forming the big picture. Seen from the hosting partner's side, it's all about utilizing the underlying platform as much as possible, trying to maximize its potential. This often means mixing customers on the same platform (logically isolated on every level), but ultimately sharing the exact same resources at a certain level. This again typically leads to traditional bottlenecks, but they are well hidden from both the customer and the man in the middle.
So what's the lesson here? Always differentiate between hosted solutions and locally implemented ones! The main principles are the same, but it's crucial to bring the whole army of partners into the planning when the solution is operated by a hosting partner. Without this, you are basically left on your own, and you'll probably never be able to see the big picture, with all the details, needed to judge where the real issues causing the problems are hiding. I guess this is true for every application or solution.