Tuesday, September 04, 2012

SQL Error: ORA-01591: lock held by in-doubt distributed transaction

Last week I came across ORA-01591 while running an ALTER command against one of our Oracle DB schemas. The statement simply failed with the following error.


SQL Error: ORA-01591: lock held by in-doubt distributed transaction 5.4.426183
01591. 00000 -  "lock held by in-doubt distributed transaction %s"
*Cause:    Trying to access resource that is locked by a dead two-phase commit
           transaction that is in prepared state.
*Action:   DBA should query the pending_trans$ and related tables, and attempt
           to repair network connection(s) to coordinator and commit point.
           If timely repair is not possible, DBA should contact DBA at commit
           point if known or end user for correct outcome, or use heuristic
           default if given to issue a heuristic commit or abort command to
           finalize the local portion of the distributed transaction.

I Googled around and found some help to resolve this. Here are the steps I followed to fix the issue.
1) Connect to Oracle as SYSDBA using this command -- sqlplus sys as sysdba

2) select LOCAL_TRAN_ID from dba_2pc_pending;
This select returns the list of all pending transaction IDs, and one of them will match the ID mentioned in the error above. That is the transaction you need to clean up. Before attempting to purge it (step 5), you first need to execute steps 3 and 4.

3) alter system enable distributed recovery;
   This statement enables distributed recovery.

4) rollback force '5.4.426183';
    commit;
 Note - The ROLLBACK statement with the FORCE option takes a text string containing either the local or the global transaction ID of the in-doubt transaction. If the correct outcome of the transaction is to commit it, use COMMIT FORCE with the same ID instead.

5) execute dbms_transaction.purge_lost_db_entry('5.4.426183');

Once the above procedure executes successfully, retry your original ALTER/DDL command and restart your app server.

Friday, August 10, 2012

Purging Asset Versions in BCC using ATG dynamo PurgingService


Recently I faced a couple of issues running the Purge Service under BCC and had to do some tuning to finally make it work on a large volume of versioned assets.
It is generally a good idea to periodically purge versioned repository data of old projects and asset versions. Over time, the versioning system in Content Administration (CA) can accumulate a large number of asset versions and completed projects. As asset versions accumulate, they can strain storage capacity and system performance. It also becomes harder to take a copy of the live data and replicate it to other environments.


The length of a purge depends on the number of repository and file assets that need to be purged. A purge that covers a large number of assets can be lengthy, especially the first time; try scheduling multiple purges in that case.
It's also a good idea to back up all affected datastores and file systems before you start a purge.

The Purge Service generates a Summary Metrics report before starting the purge and another one after the purge completes.
The report includes details such as the number of projects and asset versions removed and the number of projects and asset versions that remain, so it gives a good picture of what is going to be purged and what will be left afterwards.

You might need to make a few changes to your BCC server instance to get the Purge Service running, as the purge is likely to fail initially for various reasons. These are worth trying -

(1) The purge operation executes in a transaction. If a purge has a large number of assets, you might need to raise your application server's transaction timeout setting; for JBoss, adjust the TransactionTimeout attribute (in the /server/yourserver/conf/jboss-service.xml file).

(2) JVM memory settings in JBoss - this depends on the volume of data you have. If you get memory errors while performing the purge, consider increasing the heap by 1 GB at a time until the errors go away. In most cases going up to 6 GB is good enough (-Xms6144m -Xmx6144m under JBoss/bin/run.bat on Windows).

(3) Resolving repository data conflicts - the Purge Service might fail on ContentRepository data; to work around this, try setting VersionManagerService.enableProtectivePurge to false and then rerun the Purge Service.

For more details, refer to the Oracle ATG Content Administration documentation.

Wednesday, April 18, 2012

Effective use of robots.txt

Of late I have done some work on SEO and got an opportunity to play around with the robots.txt file and apply various rules. I would like to share a common understanding of it; feel free to provide your comments or share your experiences.

As part of sensible SEO practice, it's important to keep a firm grasp on exactly what information we don't want crawled.
A robots.txt file restricts access to your site by search engine robots that crawl the web. These bots are automated, and before they access pages of a site they check whether a robots.txt file exists that prevents them from accessing certain pages.
You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything on your site, you don't need a robots.txt file.

The simplest robots.txt file uses two rules:
User-agent: the robot the following rule applies to
Disallow: the URL you want to block

These two lines are considered a single entry in the file. You can include as many entries as you want. You can include multiple Disallow lines and multiple user-agents in one entry.

Some examples below -

User-agent: *
Disallow: /images/

User-Agent: Googlebot
Disallow: /archive/

The Disallow line lists the pages you want to block. You can list a specific URL or a pattern. The entry should begin with a forward slash (/).

  • To block the entire site, use a forward slash.
Disallow: /

  • To block a directory and everything in it, follow the directory name with a forward slash.

Disallow: /archive-directory/

  • To block a page, list the page.

Disallow: /checkout.jsp

  • To remove a specific image from Google Images, add the following:

User-agent: Googlebot-Image
Disallow: /images/logo.jpg

  • To remove all images on your site from Google Images:

User-agent: Googlebot-Image
Disallow: /

  • To block files of a specific file type (for example, .gif), use the following:

User-agent: Googlebot
Disallow: /*.gif$

  • To specify matching the end of a URL, use $. For instance, to block any URLs that end with .xls:

User-agent: Googlebot
Disallow: /*.xls$

We can restrict crawling where it's not needed with robots.txt.
A "robots.txt" file tells search engines whether they can access and therefore crawl parts of your site. This file, which must be named "robots.txt", is placed in the root directory of your site, e.g. www.example.com/robots.txt

If you have a multi-country site served from separate domains or subdomains, then each of those hosts should have its own robots.txt.
For further reading, follow these links on generating and using robots.txt:

robots.txt generator
Using robots.txt files
Caveats of each URL blocking method

Kindly note that Google only processes up to 500 KB of your robots.txt file.

Thursday, November 25, 2010

11 most commonly used FormHandlers in ATG

Here are some of the most commonly used FormHandlers. ATG provides many out-of-the-box FormHandlers, and we can also write our own custom FormHandlers by extending the GenericFormHandler class provided by ATG, as sketched in the example just below.
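
For example, a bare-bones custom form handler could look something like the sketch below. The class name, the email property, and the subscription logic are made up for illustration; only GenericFormHandler, the handleX naming convention, and addFormException come from ATG.

import java.io.IOException;

import javax.servlet.ServletException;

import atg.droplet.DropletException;
import atg.droplet.GenericFormHandler;
import atg.servlet.DynamoHttpServletRequest;
import atg.servlet.DynamoHttpServletResponse;

public class NewsletterFormHandler extends GenericFormHandler {

    private String mEmail;

    public String getEmail() {
        return mEmail;
    }

    public void setEmail(String pEmail) {
        mEmail = pEmail;
    }

    // Invoked when the form's submit input is bound to the "subscribe" handler,
    // e.g. bean="/mycompany/NewsletterFormHandler.subscribe" in the JSP.
    public boolean handleSubscribe(DynamoHttpServletRequest pRequest,
                                   DynamoHttpServletResponse pResponse)
            throws ServletException, IOException {
        if (mEmail == null || mEmail.trim().length() == 0) {
            // addFormException is inherited from GenericFormHandler
            addFormException(new DropletException("Email is required"));
            return false; // stay on the page and show the form errors
        }
        // ... call a newsletter/subscription service here ...
        return true; // continue to the success URL configured on the form
    }
}

The handler would then be registered as a Nucleus component through a .properties file and wired to the form fields with the DSP tag library.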

CartModifierFormHandler -- This formhandler is used to modify a ShoppingCart by adding items to it, deleting items from it, modifying the quantities of items in it, and preparing it for the checkout process.

ExpressCheckoutFormHandler -- The ExpressCheckoutFormHandler is used to expedite checkout of an Order. It supports creating at most one Profile-derived HardgoodShippingGroup and one Profile-derived CreditCard, followed by committing the Order.

SaveOrderFormHandler -- The SaveOrderFormHandler is used to save the user's current Order based on a descriptive name that the user specifies. A new empty Order is then made the user's current shopping cart. If a descriptive name for the Order is not specified, then one is created based on the user's Locale and date and time.

ShippingGroupFormHandler -- The ShippingGroupFormHandler is used to associate ShippingGroups with the various Order pieces. This component is used during the Order checkout process, and any Order successfully processed by the ShippingGroupFormHandler is ready for the next checkout phase, which may be Payment.

PaymentGroupFormHandler -- The PaymentGroupFormHandler is used to associate PaymentGroups with the various Order pieces. This component is used during the Order checkout process, and any Order successfully processed by the PaymentGroupFormHandler is ready for the next checkout phase, which may be confirmation.

CommitOrderFormHandler -- The CommitOrderFormHandler is used to submit the Order for final confirmation. This calls the OrderManager's processOrder method. If there are no errors with processing the Order, then the current Order in the user's ShoppingCart will be set to null and the submitted Order will become the ShoppingCart's last Order.

CancelOrderFormHandler -- The CancelOrderFormHandler is used to cancel the user's current Order, which deletes the Order from the ShoppingCart.

RepositoryFormHandler -- Saves repository data to a database.

ProfileFormHandler -- Connects forms with user profiles stored in a profile repository.

SearchFormHandler -- Specifies properties available to a search engine.

SimpleSQLFormHandler -- Works with form data that is stored in a SQL database.

Saturday, November 20, 2010

ATG's Data Anywhere Architecture

I just read one of the white papers from ATG's site and liked the way they explained the Data Anywhere Architecture, so I thought I would share the important points about it. Here we go :-)

Challenges --
All enterprise applications need to access and manipulate data in some shape or form. Common challenges that have been found in building scalable, object-oriented, data-centric applications are:

Object-to-relational mapping – Issues surrounding how a relational data representation can be appropriately mapped to an object-oriented programming language in a way that does not impact simplicity or data integrity.

Data source insulation – Issues surrounding the fact that a relational/SQL database may not be the only form of data that the application requires. Other data source types may include LDAP directories or XML file-based assets.

Data caching – Issues surrounding the appropriate use of data resources without inflicting the high volumes of data source ‘hits’ common in high-traffic Web applications. Caching issues include the integrity and cache invalidation of the data used throughout a distributed application.

Solutions --
ATG’s Data Anywhere Architecture (DAA) meets all of these challenges. DAA gives developers a single API, called the Repository API, for using data resources in their applications. Behind the Repository API, DAA insulates and abstracts application developers from the specifics of the data source, so much so that the nature of the underlying data source may completely change without major impact. For example, customer data may reside in a SQL/JDBC database today but move to an LDAP directory in the future; DAA can handle this without having to touch any code within the application.

The fundamental construct in the DAA is a ‘Repository’. A Repository is a logical view of a data resource (or resources) and, to a developer, manifests itself as a set of JavaBeans to be used within the application. Like everything else in an ATG application, Repositories are represented as Nucleus components.
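
To give a feel for the Repository API, here is a minimal lookup sketch. The component path, item descriptor name, and property name used here are assumptions for illustration only; Repository, RepositoryItem, getItem and getPropertyValue are the standard API calls.

import atg.nucleus.Nucleus;
import atg.repository.Repository;
import atg.repository.RepositoryException;
import atg.repository.RepositoryItem;

public class RepositoryLookupExample {

    public String lookupUserEmail(String pUserId) throws RepositoryException {
        // Resolve the repository as a Nucleus component
        // (the component path is an assumption for illustration).
        Repository userRepository = (Repository) Nucleus.getGlobalNucleus()
                .resolveName("/atg/userprofiling/ProfileAdapterRepository");

        // Fetch a single item by repository id and item descriptor name.
        RepositoryItem user = userRepository.getItem(pUserId, "user");
        if (user == null) {
            return null;
        }
        // Property names come from the repository definition XML file.
        return (String) user.getPropertyValue("email");
    }
}

In real application code the repository would normally be injected as a component property rather than resolved through the global Nucleus, but the lookup calls stay the same.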

The Repository is described in a Repository Definition XML file, which holds all appropriate information about the data’s physical location and how it is mapped to the logical view. The DAA consists of three primary Repository types for data access and manipulation.

SQL repository – A SQL Repository presents a logical view of data stored in a relational database, accessed through JDBC. The Repository definition file defines how the databases, tables, rows, and columns of a relational database are presented through the Repository API. It also defines the item caching strategy to be employed to optimize database read/write performance. (A small query sketch follows these three repository type descriptions.)

LDAP repository – An LDAP Repository presents a logical view of any data source that has an LDAP interface, typically used to store user data. The Repository definition file defines how the hierarchical structure and contents of an LDAP directory are presented through the Repository API.

Integration repository – In some cases, data sources may not be directly accessible, or information may be returned by an application rather than directly from a database or directory service. The Integration Repository presents a Repository API in front of some remote application processing. For example, an Integration Repository may be implemented to facilitate integration with SAP’s BAPI interface, or to retrieve information through the execution of a Web Service SOAP call. The Integration Repository is an open architecture into which these specific integration technologies can be plugged, while still presenting the same Repository API to the application developer. It also gives developers sophisticated data access and caching strategies to protect the application from remote latency and downtime.
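
As promised above, here is a rough sketch of how a developer queries a SQL Repository through the Repository API instead of writing SQL. The item descriptor and property names are assumed for illustration; RepositoryView, QueryBuilder and executeQuery are the standard query API.

import atg.repository.Query;
import atg.repository.QueryBuilder;
import atg.repository.QueryExpression;
import atg.repository.Repository;
import atg.repository.RepositoryException;
import atg.repository.RepositoryItem;
import atg.repository.RepositoryView;

public class RepositoryQueryExample {

    // Returns all items of the (assumed) "user" descriptor with a matching lastName.
    public RepositoryItem[] findUsersByLastName(Repository pRepository, String pLastName)
            throws RepositoryException {
        // A view corresponds to one item descriptor defined in the XML definition file.
        RepositoryView view = pRepository.getView("user");

        // Build the query "lastName = <value>" through the QueryBuilder.
        QueryBuilder builder = view.getQueryBuilder();
        QueryExpression property = builder.createPropertyQueryExpression("lastName");
        QueryExpression constant = builder.createConstantQueryExpression(pLastName);
        Query query = builder.createComparisonQuery(property, constant, QueryBuilder.EQUALS);

        // The repository translates this into SQL and applies its item caches.
        return view.executeQuery(query);
    }
}
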
In addition to the primary Repository types mentioned so far, there are two ‘overlay’ repository types that can be used.

Secure repository – A Secure Repository introduces application-level security and access control to the data being accessed and manipulated. Working with ATG’s Security Management Framework, varying levels of security can be defined on the Repository contents, all the way down to individual data properties. Access Control Lists (ACLs) are written to describe the different levels of access that are provided to ATG’s User Model, which itself provides a rich structure to model users, organizational hierarchies, and roles.

Versioned repository – A Versioned Repository introduces a versioning mechanism on top of one of the other primary Repository types. It provides all of the required tools to maintain, version, and roll back versions of a Repository's contents. Any existing SQL Repository may be turned into a Versioned Repository through additional configuration files. The Versioned Repository architecture is heavily used by ATG’s Content Administration product, but the versioning features are open for any other, customer-specific type of application usage. Versioned Repositories integrate closely with ATG’s workflow capabilities that reside in the ATG Adaptive Scenario Engine.

A Composite Repository is the final construct, and it can be especially useful for building applications requiring access to data in multiple data sources and formats.

Composite repository – A Composite Repository represents an aggregate view over other repository types, or over other Composite Repositories (although one should not create too many layers of Composite Repository). The most common use of a Composite Repository is where a business’s customer data is distributed over multiple SQL databases and an LDAP directory, but a Web application wants a ‘single view of the customer’ to reduce application complexity.

A Composite Repository provides some welcome simplicity.

To ensure that Web site usage of SQL databases scales, the DAA provides sophisticated caching and cache invalidation mechanisms for SQL Repositories. It provides all of the necessary tools to manage and flush caches at the repository item level, and there are also mechanisms for managing distributed caches and cache invalidation via JMS or TCP/IP.

All in all, ATG’s Data Anywhere Architecture provides a rich, robust, and highly scalable set of tools to facilitate the use of enterprise data resources, while providing a loose coupling between data source and application.
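
As a small illustration of the cache management hooks mentioned above, the standard SQL repository implementation (GSARepository) exposes a method to flush its caches programmatically. The component path below is an assumption for illustration.

import atg.adapter.gsa.GSARepository;
import atg.nucleus.Nucleus;

public class RepositoryCacheFlushExample {

    // Flushes the item and query caches of a SQL repository instance.
    public void flushCatalogCaches() {
        // The component path is an assumption for illustration.
        GSARepository catalog = (GSARepository) Nucleus.getGlobalNucleus()
                .resolveName("/atg/commerce/catalog/ProductCatalog");

        // Invalidate all caches held by this repository instance; distributed
        // invalidation across a cluster is configured separately through the
        // repository's cache modes (e.g. via JMS or TCP events).
        catalog.invalidateCaches();
    }
}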

If you want to explore more features of the ATG repository, or would like to compare it with Hibernate, follow the link below to the ATG community site - https://community.atg.com/docs/DOC-1894
