Thursday, November 25, 2010

11 most commonly used FormHandlers in ATG

Here are some of the most commonly used FormHandlers. ATG provides many FormHandlers out of the box, and we can also write our own custom FormHandlers by extending the GenericFormHandler class provided by ATG.

CartModifierFormHandler -- This formhandler is used to modify a ShoppingCart by adding items to it, deleting items from it, modifying the quantities of items in it, and preparing it for the checkout process.

ExpressCheckoutFormHandler -- The ExpressCheckoutFormHandler is used to expedite the checkout of an Order. It supports creating at most one Profile-derived HardgoodShippingGroup and one Profile-derived CreditCard, followed by committing the Order.

SaveOrderFormHandler -- The SaveOrderFormHandler is used to save the user's current Order based on a descriptive name that the user specifies. A new empty Order is then made the user's current shopping cart. If a descriptive name for the Order is not specified, then one is created based on the user's Locale and date and time.

ShippingGroupFormHandler -- The ShippingGroupFormHandler is used to associate ShippingGroups with the various Order pieces. This component is used during the Order checkout process, and any Order successfully processed by the ShippingGroupFormHandler is ready for the next checkout phase, which may be Payment.

PaymentGroupFormHandler -- The PaymentGroupFormHandler is used to associate PaymentGroups with the various Order pieces. This component is used during the Order checkout process, and any Order successfully processed by the PaymentGroupFormHandler is ready for the next checkout phase, which may be confirmation.

CommitOrderFormHandler -- The CommitOrderFormHandler is used to submit the Order for final confirmation. This calls the OrderManager's processOrder method. If there are no errors with processing the Order, then the current Order in the user's ShoppingCart will be set to null and the submitted Order will become the ShoppingCart's last Order.

CancelOrderFormHandler -- The CancelOrderFormHandler is used to cancel the user's current Order, which deletes the Order from the ShoppingCart.

RepositoryFormHandler -- Saves repository data to a database.

ProfileFormHandler -- Connects forms with user profiles stored in a profile repository.

SearchFormHandler -- Specifies properties available to a search engine.

SimpleSQLFormHandler -- Works with form data that is stored in a SQL database.
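As a sketch of how such handlers are typically invoked from a page, here is a hypothetical add-to-cart form using the DSP tag library. The product and SKU ids and the success URL are placeholder values, not from any real catalog:

```jsp
<%-- Hypothetical add-to-cart form; sku/product ids and URLs are placeholders --%>
<dsp:importbean bean="/atg/commerce/order/purchase/CartModifierFormHandler"/>
<dsp:form action="cart.jsp" method="post">
  <dsp:input bean="CartModifierFormHandler.catalogRefIds" type="hidden" value="sku10001"/>
  <dsp:input bean="CartModifierFormHandler.productId" type="hidden" value="prod10001"/>
  <dsp:input bean="CartModifierFormHandler.addItemToOrderSuccessURL" type="hidden" value="cart.jsp"/>
  <dsp:input bean="CartModifierFormHandler.addItemToOrder" type="submit" value="Add to Cart"/>
</dsp:form>
```

Submitting the form triggers the handler's handleAddItemToOrder logic; on success the user is redirected to the configured success URL.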

Saturday, November 20, 2010

ATG's Data Anywhere Architecture

I just read one of the white papers from ATG's site and liked the way it explains the Data Anywhere Architecture. I thought I would share the important points about it. Here we go :-)

Challenges --
All enterprise applications need to access and manipulate data in some shape or form. Common challenges that have been found in building scalable, object-oriented, data-centric applications are:

Object-to-relational mapping – Issues surrounding how a relational data representation can be appropriately mapped to an object oriented programming language in a way that does not impact simplicity or data integrity.

Data source insulation – Issues surrounding the fact that relational/SQL database may not be the only form of data that the application requires. Other data source types may include LDAP directories or XML file-based assets.

Data caching – Issues surrounding the appropriate use of data resources without inflicting the high volumes of data source ‘hits’ common in high-traffic Web applications. Caching issues include the integrity and cache invalidation of the data used throughout a distributed application.

Solutions --
ATG’s Data Anywhere Architecture (DAA) meets all of these challenges. DAA gives developers a single API, called the Repository API, for using data resources in their applications. Behind the Repository API, DAA insulates and abstracts application developers from the specifics of the data source, so much so that the nature of the underlying data source may completely change without major impact. For example, customer data may reside in a SQL/JDBC database today, but will move to an LDAP directory in the future. DAA could handle this without having to touch any code within the application. The fundamental construct in the DAA is a ‘Repository’. A Repository is a logical view of a data resource (or resources), and to a developer, manifests itself as a set of JavaBeans to be used within their application. Like everything else in an ATG application, Repositories are represented as Nucleus components.

The Repository is described in a Repository Definition XML file, which holds all appropriate information about the data’s physical location and how it is mapped to the logical view. The DAA consists of three primary Repository types for data access and manipulation.

SQL repository – A SQL Repository presents a logical view of data stored in a relational database, accessed through JDBC. The Repository definition file defines how the databases, tables, rows, and columns of a relational database are presented through the Repository API. It also defines the item caching strategy to be employed to optimize database read/write performance.
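As an illustration, a minimal SQL repository definition file might look like the following sketch. The item descriptor, table, and column names here are all hypothetical:

```xml
<!-- Hypothetical repository definition; names are illustrative only -->
<gsa-template>
  <item-descriptor name="member" cache-mode="simple">
    <table name="app_member" type="primary" id-column-name="member_id">
      <property name="login" column-name="login" data-type="string"/>
      <property name="age" column-name="age" data-type="int"/>
    </table>
  </item-descriptor>
</gsa-template>
```

The `cache-mode` attribute is where the item caching strategy mentioned above is configured per item descriptor.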

LDAP repository – An LDAP Repository presents a logical view of any data source that has an LDAP interface, typically used to store user data. The Repository definition file defines how the hierarchical structure and contents of an LDAP directory are presented through the Repository API.

Integration repository – In some cases, data sources may not be directly accessible, or information may be returned by an application rather than directly from a database or directory service. The Integration Repository presents a Repository API in front of some remote application processing. For example, an Integration Repository may be implemented to facilitate integration with SAP’s BAPI interface, or to retrieve information through the execution of a Web Service SOAP call. The Integration Repository is an open architecture into which these specific integration technologies can be plugged, while still presenting the same Repository API to the application developer. It also gives developers sophisticated data access and caching strategies to protect the application from remote latency and downtime.
In addition to the primary types of Repository mentioned so far, there are two types of ‘overlay’ repository types that can be used.

Secure repository – A Secure Repository introduces application level security and access control to the data being accessed and manipulated. Working with ATG’s Security Management Framework, varying levels of security can be defined on the Repository contents, all the way down to individual data properties. Access Control Lists (ACLs) are written to describe the different levels of access that are provided to ATG’s User Model, which itself provides a rich structure to model user, organizational hierarchies and roles.

Versioned repository – A Versioned Repository introduces a versioning mechanism to one of the other primary Repository types. It provides all of the required tools to maintain, version, and roll back versions of a Repository's contents. Any existing SQL Repository may be turned into a Versioned Repository through additional configuration files. The Versioned Repository architecture is heavily used by ATG’s Content Administration product, but the versioning features are open for any other type of application usage that may be customer specific. Versioned Repositories integrate closely with ATG’s workflow capabilities that reside in the ATG Adaptive Scenario Engine. A Composite Repository is the final construct, which can be especially useful for building applications requiring access to data in multiple data sources and formats.

Composite repository – A Composite Repository represents an aggregate view over other repository types, or over other Composite Repositories (although one should not create too many layers of Composite Repository). The most common use of a Composite Repository is where a business's customer data is distributed over multiple SQL databases and an LDAP directory, but a Web application wants a ‘single view of the customer’ to reduce application complexity.

A Composite Repository provides some welcome simplicity.
To ensure that Web site usage of a SQL database scales, the DAA provides sophisticated caching and cache invalidation mechanisms for SQL Repositories. It supplies all of the necessary tools to manage and flush caches at the repository item level, along with mechanisms for managing distributed caches and cache invalidation via JMS or TCP/IP. All in all, ATG’s Data Anywhere Architecture provides a rich, robust, and highly scalable set of tools to facilitate the use of enterprise data resources, while keeping the data source and the application loosely coupled.

If you want to explore further features of the ATG repository, or compare it with Hibernate, follow the link below to the ATG community site - https://community.atg.com/docs/DOC-1894

Tuesday, October 05, 2010

ATG Dynamo Application Framework (DAF) and Nucleus

I hope most of you already know about the Dynamo Application Framework (DAF) and Nucleus concepts in ATG. I have tried to summarize the basic ideas, drawn mostly from ATG white papers. I hope it will be useful for someone who is new to the world of ATG. Any comments or suggestions are most welcome. I will try to write more practical examples about Dynamo and commerce-related topics.

The ATG Dynamo Application Framework is an application framework designed to help simplify the creation of Web applications. It provides a large number of the common services, components, and frameworks that an application developer needs when building highly scalable, feature-rich, enterprise Web applications. DAF provides three core ‘pillars’ to help a developer construct an application.

Component model – Any software application requires a component model that provides structure and coherence to an application. The component model used by DAF is JavaBeans, managed in a component container called Nucleus (read about it in the section below).

Data access model – All recent Web applications require information access and an ability to manipulate data. DAF’s data access and manipulation model is called the Data Anywhere Architecture.

Messaging model – Responsive applications require a messaging architecture that allows events to be fired and appropriate actions to execute on the occurrence of those events elsewhere in the system. The JMS messaging model implemented by DAF is managed by a service called the Patch Bay.

Although not classified as a ‘main pillar,’ the user interface (UI) programming model by which these previously mentioned elements can be used is also important, and an area where ATG has innovated ahead of the general market. DAF uses JavaBeans as the primary, lightweight, component model. These JavaBean components are configured and linked together by .properties files within Nucleus. The DAF application framework can also be run on all major J2EE application servers (JBoss, WebLogic, WebSphere etc.).


Nucleus is a ‘light weight’, yet feature-rich component model. It adheres to the “Inversion of Control” design pattern, whereby software components are discrete entities coupled together by the Nucleus container, rather than through direct reference. The services and structure provided by Nucleus makes building Java applications much simpler than when starting with the base set of Java and J2EE services. It promotes good interface-based programming principles and helps application developers take a modular approach, resulting in more modularized, maintainable, and understandable applications.

Nucleus is DAF’s component namespace for building applications from JavaBeans. Nucleus allows the assembly of applications through simple configuration files that specify what components are used by the application, what properties they should have, and how components hook together. Nucleus itself provides no application-specific functionality, since it is only a container in which components live, discover, and interact with each other. It is the collection of components that make up the functionality of an overall application. Nucleus organizes these application components into a hierarchical namespace.

A lot of what makes Nucleus special is encapsulated in the following core areas:

Component creation and administration – Nucleus provides a simple way to write new components. It is a simple process to take any Java object and allow it to act as a component in Nucleus. Nucleus takes on the task of creating and initializing components. A very useful aspect of Nucleus is that applications don’t need to contain code to create instances of components. Instead, components can be created and administered through configuration files that specify the component and the initial values of its properties. If needed, administrators can alter the properties of ‘live’ components within the application. The component instances are then initialized automatically at start up time, rather than programmatically created. Nucleus employs a ‘lazy instantiation’ policy for creating components. One component is only created at the point it is referenced by another.

Component layering and combination – Nucleus provides a convenient way to modify and extend component properties by organizing configuration files into layers. This layering allows application developers to add new components or override the settings of existing ones without modifying the configuration files shipped by ATG. Nucleus automatically combines the layers at application start-up. These layers are organized into ‘modules’ so the associated Java class files can be maintained with the configuration files, simplifying application maintenance and upgrade.

Component scoping – To further increase its usefulness as a component model for Web applications, Nucleus makes it very easy for application developers to set the scope of their components. The scope can be set to ‘global’, ‘session’, or ‘request’. Nucleus takes care of how the components are managed so that developers do not have to do any specific coding.
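To tie these ideas together, here is a hypothetical Nucleus component definition as a .properties file. The component path, class name, and property values are all invented for illustration; only the $class and $scope conventions are standard Nucleus syntax:

```properties
# Hypothetical component at /atg/myapp/GreetingService
$class=com.example.myapp.GreetingService
$scope=session
greeting=Hello
# Components are wired together by Nucleus path reference:
profileTools=/atg/userprofiling/ProfileTools
```

At startup (or on first reference, given lazy instantiation), Nucleus instantiates the class, sets the `greeting` property, and injects the component found at the referenced Nucleus path.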

Nucleus includes a large number of out-of-the-box generalized service components that can be used in many ways within an application. Service components include TCP Request Services, Scheduler Services, ID Generation Services, Resource Pools, Queues, Email Senders and Listeners, and many more. Each Nucleus service has a unique Nucleus name. For example, the default javax.sql.DataSource component is located at /atg/dynamo/service/jdbc/JTDataSource.

It is clear that emerging frameworks typically tackle one element of DAF or another, but not everything. Hibernate tackles the data access problem, Struts tackles the UI development problem, etc. It is often left to the application developer to figure out how these different initiatives are used together.





Friday, April 16, 2010

Playing around with the equals() and hashCode() methods of Object class

Why, when & how of overriding equals() and hashCode() methods of Object class

As we know, the Object class has five non-final methods, namely equals, hashCode, toString, clone, and finalize.
I believe they were primarily designed to be overridden according to specific needs.
I am just trying to summarize my understanding of the first two of these methods: when we should override them, how we should implement them, and so on. Any comments/discussion/feedback would be highly appreciated.

Overriding the equals method -
Ideally, you should override the equals method when you want to specify the rules of logical equality of objects. Two objects are logically equal if they have the same values for the attributes that determine uniqueness.

We need to implement an equivalence relation between non-null object references. The rules for overriding the equals method can be found on Sun's site. They basically require reflexivity, symmetry, transitivity, and consistency among objects.

The rule for null references is that for any non-null reference value x, x.equals(null) must return false.

The Object class already provides an implementation of the equals method:

public boolean equals(Object obj) {
    return (this == obj);
}

The method above simply tests for equality of object references. This is not always the desired behavior, particularly when comparing Strings. That's why the String class provides its own implementation of the equals method.

For example suppose we have a Student class which has a constructor like this.
public Student(String name, String id, int age) { …}

Now if you create two Student objects with the same attributes, you'd want those two objects to be considered the same student. But if you run the code below, you will get false as the result.

Student student1 = new Student("sajid", "456789", 28);
Student student2 = new Student("sajid", "456789", 28);
System.out.println(student1.equals(student2));

The result is false because student1 and student2 are different references, and the equals method being used here (inherited from Object) compares references. So we need to override the equals method to compare our uniqueness attributes for the comparison to work.

Caution: we should not make the mistake of overloading the equals method instead of overriding it. The argument to the equals method must be an Object.

Note the difference between the two signatures below:
public boolean equals(Object obj)
public boolean equals(Student student)
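To see why the difference matters, here is a minimal sketch with a hypothetical Badge class. The overloaded version is never called when the static type of the reference is Object, which is exactly how collections invoke equals:

```java
// Hypothetical Badge classes illustrating overload vs. override.
class OverloadedBadge {
    final String id;
    OverloadedBadge(String id) { this.id = id; }
    // Parameter type is OverloadedBadge, so this OVERLOADS rather than
    // overrides: Object.equals is still used through an Object reference.
    public boolean equals(OverloadedBadge other) {
        return id.equals(other.id);
    }
}

class OverriddenBadge {
    final String id;
    OverriddenBadge(String id) { this.id = id; }
    // Parameter type is Object, so this correctly OVERRIDES Object.equals.
    @Override
    public boolean equals(Object obj) {
        return obj instanceof OverriddenBadge
                && id.equals(((OverriddenBadge) obj).id);
    }
    @Override
    public int hashCode() { return id.hashCode(); }
}

public class OverloadDemo {
    public static void main(String[] args) {
        Object a = new OverloadedBadge("42");
        Object b = new OverloadedBadge("42");
        // Static type is Object, so the overload is invisible: prints false.
        System.out.println(a.equals(b));

        Object c = new OverriddenBadge("42");
        Object d = new OverriddenBadge("42");
        System.out.println(c.equals(d)); // prints true
    }
}
```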

A good implementation of equals should start with an identity check, since x.equals(x) must always return true (reflexivity): if this == obj, we can return true immediately and there is no need to proceed further.

Another important point is to test the type of the object passed to the equals() method; a caller might pass, say, a String rather than a Student. The instanceof operator helps us check this.

instanceof also eliminates the trivial case when the object passed is null, because it returns false if its left operand is null.

So the whole implementation might look like this:

public boolean equals(Object obj) {
    if (this == obj) {
        return true;
    }
    if (!(obj instanceof Student)) {
        return false;
    }
    Student student = (Student) obj;
    return age == student.getAge() && name.equals(student.getName())
            && id.equals(student.getId());
}

Now System.out.println(student1.equals(student2)); will print true.

Food for thought -
Note that we deliberately compared the ages (ints) first. The && operator short-circuits: if the age comparison fails, the rest of the comparison is abandoned and false is returned. It is therefore a performance advantage to put the cheapest tests first and the more expensive ones last.

Is there any situation when we should not override equals() ?
Yes. When a reference check is sufficient, which is when each instance of the class is inherently unique. Another situation is when a parent class has already implemented the desired behavior; then we need not bother.

Now whenever we override the equals method, we must also override the hashCode method.
So let's move ahead to hashing …


Overriding the hashCode method -
Why hashCode? Simple: the hashCode method exists for the benefit of hash-based collections. This hash code value is used by hash-based collections such as Hashtable, HashMap, HashSet, etc. for storing, retrieving, and other data structure operations.

The contract says that if two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result. It also says that the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified.

It further says that it is not required that two objects which are unequal according to the equals method produce distinct integer results from hashCode.

So equal objects must have equal hashCodes. An easy way to ensure that this condition is always satisfied is to use the same attributes used in determining equality in determining the hashCode. Now we should realize why it is important to override hashCode every time we override equals.

The story of the hash table and buckets (some useful background) -
We can think of a hash table as a group of buckets. When you add a key-value pair, the key's hashCode is used to determine in which bucket to put the mapping.
Similarly, when you call the get method with a key, the key's hashCode is used to determine in which bucket the mapping was stored. That bucket is then searched (sequentially) for the mapping.
If you have two "equal" objects with different hashCodes, the hash table will see them as different objects and put them in different buckets. Likewise, you can only retrieve an object from a hash table by passing a key with the same hashCode as the object you are trying to retrieve. If no matching entry is found, null is returned.
So let's say it again, "Equal objects must have equal hashCodes".
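The bucket behaviour described above can be demonstrated with a short sketch, using a hypothetical BrokenKey class that overrides equals but not hashCode:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical key class: equals IS overridden, hashCode is NOT,
// so two "equal" keys almost always get different identity hash codes.
class BrokenKey {
    final String value;
    BrokenKey(String value) { this.value = value; }
    @Override
    public boolean equals(Object obj) {
        return obj instanceof BrokenKey && value.equals(((BrokenKey) obj).value);
    }
    // hashCode deliberately NOT overridden.
}

public class BucketDemo {
    public static void main(String[] args) {
        Set<BrokenKey> set = new HashSet<>();
        set.add(new BrokenKey("k1"));
        // Equal by equals(), but a different identity hashCode means
        // HashSet looks in the wrong bucket and stores a duplicate.
        set.add(new BrokenKey("k1"));
        System.out.println(set.size()); // almost always 2, not 1
    }
}
```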

The best hashCode approach -
We should try to make all unequal objects have unequal hashCodes. This means each mapping is stored in its own bucket. This is the optimal case for the hash table and results in constant search times, because only the correct bucket needs to be searched, and once the correct bucket is found, the search is complete. That's why the API docs say

"However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hash tables."

How do I implement hashCode?
We want it to be linked to the equals method in some way, so it must use the same attributes as the equals method.

The hashCode method signature in Object class is -
public native int hashCode();

The key thing to note here is that the method returns an integer. This means that we should try to get an integer representation of all the attributes that were used to determine equality in the equals method. The trick is that we should get this integer representation in a way that ensures that we always get the same int value for the same attribute value.
Once we have the integers, it's up to us to find a way of combining them into one integer that represents the hashCode for our object.

Whatever algorithm we use, we must make sure that the result is always an integer and will be the same integer returned for equal objects.
So how do we determine the hashCodes for the attributes themselves?

For the individual attributes values, you can use the following popular approach
(Source: http://bytes.com/topic/java/insights/723476-overriding-equals-hashcode-methods )
  • For boolean fields, use 1 if it's true and 0 if it's false (any fixed mapping works, as long as it is consistent).
  • Converting byte, char or short to int is easy. Just cast to int. The result is always the same for the same value of the attribute.
  • A long is bigger than an int. You can use (int)(value ^ (value >>> 32)). This is the method used by the java.lang.Long class.
  • If the field is a float, use Float.floatToIntBits(value).
  • If the field is a double, use Double.doubleToLongBits(value), and then hash the resulting long using the method above for long type.
  • If the field is an object reference and this class's equals method compares the field by recursively invoking equals, then recursively invoke hashCode on the field as well.
  • If the value of the field is null, return 0 (or some other constant, 0 is more common but you might want to distinguish it from the boolean case).
  • Finally, if the field is an array, go through each element and compute each element's hashCode value, then combine them; summing the hashCodes is a simple approach (java.util.Arrays.hashCode can also do this for you).

Let's look at some ways of doing it.
A common approach is to choose a multiplier, say p, and then compute an int value by applying the following formula
hashCode = multiplier * hashCode + attribute's hashCode for all the attributes.

For three attributes (a1, a2, a3), the hashCode would be computed in the following steps
hashCode = multiplier * hashCode + a1's hashCode //step 1
hashCode = multiplier * hashCode + a2's hashCode //step 2
hashCode = multiplier * hashCode + a3's hashCode //step 3
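Here is a small sketch of the three steps above, using 31 as the multiplier and 1 as the seed. With those particular choices and int attributes, the result happens to coincide with java.util.Objects.hash (added in Java 7), since Integer.hashCode(v) is just v:

```java
public class HashCombineDemo {
    // Combine three attribute hash codes with the multiplier formula
    // from the text: hashCode = multiplier * hashCode + attribute's hashCode.
    static int combine(int a1, int a2, int a3) {
        int hashCode = 1;                 // seed
        hashCode = 31 * hashCode + a1;    // step 1
        hashCode = 31 * hashCode + a2;    // step 2
        hashCode = 31 * hashCode + a3;    // step 3
        return hashCode;
    }

    public static void main(String[] args) {
        // With multiplier 31 and seed 1, this matches Objects.hash for ints.
        System.out.println(combine(1, 2, 3) == java.util.Objects.hash(1, 2, 3)); // prints true
    }
}
```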

Now, putting it all together, our Student class will look like this:

public class Student {
    String id;
    String name;
    int age;
    private volatile int hashCode = 0; // cached; computed at most once

    public Student(String name, String id, int age) {
        this.name = name;
        this.id = id;
        this.age = age;
    }

    String getName() {
        return name;
    }

    int getAge() {
        return age;
    }

    String getId() {
        return id;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof Student)) {
            return false;
        }
        Student student = (Student) obj;
        return age == student.getAge() && name.equals(student.getName())
                && id.equals(student.getId());
    }

    @Override
    public int hashCode() {
        final int multiplier = 23; // use a prime number here, like 23 or 31
        if (hashCode == 0) {
            int code = 133;
            code = multiplier * code + age;
            code = multiplier * code + id.hashCode();
            code = multiplier * code + name.hashCode();
            hashCode = code;
        }
        return hashCode;
    }
}

To test we can use a main() method in the above class like this -

public static void main(String[] args) {
    Student student1 = new Student("sajid", "456789", 28);
    Student student2 = new Student("sajid", "456789", 28);
    System.out.println(student1.equals(student2)); // prints true
}

Another way to implement hashCode -
As with the Student class, the hash code calculation must involve all attributes of the class that contribute to its equality comparison. For a hypothetical Car class with licensePlate and vinNumber fields, the following would work:
public int hashCode() {
    int hash = 7;
    hash = 31 * hash
            + (null == this.licensePlate ? 0 : this.licensePlate.hashCode());
    hash = 31 * hash
            + (null == this.vinNumber ? 0 : this.vinNumber.hashCode());
    return hash;
}
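For comparison, the same null-safe idea can be written with the java.util.Objects utility methods (added in Java 7). This is a sketch: the Car fields mirror the snippet above, and the constructor is invented for illustration:

```java
import java.util.Objects;

// Hypothetical Car class with null-safe equals/hashCode via java.util.Objects.
class Car {
    final String licensePlate;
    final String vinNumber;

    Car(String licensePlate, String vinNumber) {
        this.licensePlate = licensePlate;
        this.vinNumber = vinNumber;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof Car)) return false;
        Car other = (Car) obj;
        // Objects.equals handles nulls without explicit ?: checks.
        return Objects.equals(licensePlate, other.licensePlate)
                && Objects.equals(vinNumber, other.vinNumber);
    }

    @Override
    public int hashCode() {
        // Equivalent to the 31-multiplier formula, with null mapped to 0.
        return Objects.hash(licensePlate, vinNumber);
    }
}

public class CarDemo {
    public static void main(String[] args) {
        Car a = new Car("KA-01-1234", null);
        Car b = new Car("KA-01-1234", null);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints true
    }
}
```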

Java provides an implementation of hashCode for built-in classes such as String and the wrapper classes. If you have an int that is part of your equals comparison, you can simply combine it into your hash code value with arithmetic such as addition or multiplication.
Several APIs demand that the user implement the hashCode() method. The reason is that these APIs use hash-based containers (like HashMap) as a fast means of managing lots of objects; always comparing objects using equals() would be far too slow.

I will try to dig further into the internals of hashing, the differences between the various hash-based collections, and the best performance strategies around them.

Wednesday, January 27, 2010

Lets come forward

I don't know why, but it really gives me eternal happiness to help the people around me, and I feel that's the way to live.
We should always be concerned about others around us. In recent times I have come across quite a few statements, calls, ads, and interviews which urged me to come forward for such good causes.

I came across -
CHILDLINE (1098) -- The country's first toll-free tele-helpline for street children in distress. Just dial 1098.
WorldVision -- Here you can sponsor a child across the world, donate money, and much more.

I am trying to spread the word here and would like to get involved with them. I think your contribution is important, no matter how big or small.
I also feel that these organizations need to come forward and bring greater transparency to common people; there is help available, but most of the time people don't know about it.
I request you all to share your experiences and associations, if any, with such organizations/NGOs.

Wednesday, January 20, 2010

The brand Sapient

It's been more than a year now at Sapient. I still remember a few people telling me not to change companies ("it's not a good time", at the end of 2008), and a few telling me I couldn't even survive six months at Sapient due to the slogging. But I feel I made the right move and joined the right company.
I can say it's my kind of place. Some of the things I really enjoy here are the open culture and unlimited freedom.
Sapient is very unique in its own way; we follow different kinds of processes here to make things work better.

Some of the terms I learned/heard at Sapient (call it Sapient lingo) --
Retrofit
Caveats
Sync-up
Gated
Head-on
Triage

I want to thank Rupinder for providing me an opportunity to join Sapient in the worst of economic conditions way back in October 2008.
I am lucky indeed :-)
