Enterprise Application Development With Ext JS and Spring
Gerald Gierer
Chapter No. 5: "Testing the DAO Layer with Spring and JUnit"
Chapter 4, Data Access Made Easy, introduces the Data Access Object (DAO) design pattern and helps implement a robust data access layer using the domain classes we defined in the previous chapter. Java generics and interfaces, the Simple Logging Facade for Java (SLF4J), the JPA EntityManager, and transactional semantics are also introduced.

Chapter 5, Testing the DAO Layer with Spring and JUnit, introduces the configuration of a JUnit testing environment and the development of test cases for several of our DAO implementations. We introduce the Spring Inversion of Control (IoC) container and explore the Spring configuration to integrate Spring-managed JUnit testing with Maven.

Chapter 6, Back to Business: The Service Layer, examines the role of the service layer in enterprise application development. Our 3T business logic is then implemented by the Data Transfer Objects (DTO) design pattern using Value Objects (VO). We also examine writing test cases prior to coding the implementation, a core principle of test-driven development and extreme programming.

Chapter 7, The Web Request Handling Layer, defines a request handling layer for web clients that generates JSON data using the Java API for JSON processing, a new API introduced in Java EE 7. We implement lightweight Spring controllers, introduce Spring handler interceptors, and configure Spring MVC using Java classes.

Chapter 8, Running 3T on GlassFish, completes our Spring configuration and allows us to deploy the 3T application to the GlassFish 4 server. We also configure the GlassFish 4 server to run independently of the NetBeans IDE, as would be the case in enterprise environments.

Chapter 9, Getting Started with Ext JS 4, introduces the powerful Ext JS 4 framework and discusses the core Ext JS 4 MVC concepts and practical design conventions. We install and configure our Ext JS development environment using Sencha Cmd and the Ext JS 4 SDK to generate our 3T application skeleton.
Chapter 10, Logging On and Maintaining Users, helps us develop the Ext JS 4 components that are required for logging on to the 3T application and maintaining users. We will discuss the Ext JS 4 model persistence, build a variety of views, examine application concepts, and develop two Ext JS controllers. Chapter 11, Building the Task Log User Interface, continues to enhance our understanding of the Ext JS 4 components as we implement the task log user interface. Chapter 12, 3T Administration Made Easy, enables us to develop the 3T Administration interface and introduces the Ext JS 4 tree component. We examine dynamic tree loading and implement drag-and-drop tree actions.
Chapter 13, Moving Your Application to Production, will help us prepare, build, and deploy our 3T project to the GlassFish server. We introduce Ext JS theming, integrate Sencha Cmd compiling with Maven to automate the Ext JS 4 app-all.js file generation process, and learn how to deploy our production build on the GlassFish server. Appendix, Introducing Spring Data JPA, provides a very brief introduction to Spring Data JPA as an alternative to the implementation discussed in Chapter 4, Data Access Made Easy.
Chapter 5
Confidence: Developers are reluctant to touch code that is fragile. Well-tested code with solid test cases can be approached with confidence.

Regression proofing: Test cases build and evolve with the application. Enhancements and new functionality may break old code silently, but a well-written test suite will go a long way toward identifying such scenarios.
Enterprise applications, with many programmers doing parallel development across different modules, are even more vulnerable. Coding side effects may result in far-reaching consequences if not caught early.
A helper method was used to trim a Java String passed in as an argument. The argument was tested for null, and the method returned an empty string "" in that case. The helper method was used everywhere in the application. One day, a developer changed the helper method to return null if the passed-in argument was null (they needed to distinguish between null and an empty string). A simple test case would have ensured that this change never made it into version control. The sheer number of null pointer exceptions when using the application was amazing!
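The anecdote above is easy to make concrete. The following is a hypothetical reconstruction of such a helper (the class and method names are illustrative, not from the book's code base) together with the contract a test would have pinned down:

```java
// Hypothetical reconstruction of the helper described above.
public class StringHelper {

    // Original contract: never return null; a null argument yields "".
    // Changing this branch to "return null;" is exactly the regression
    // described in the anecdote.
    public static String safeTrim(String s) {
        if (s == null) {
            return "";
        }
        return s.trim();
    }
}
```

A one-line test such as `assertEquals("", StringHelper.safeTrim(null));` would have failed the moment the contract changed, long before the null pointer exceptions reached users.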
Note how Maven uses the same directory structure for both source and testing layouts. All resources required to execute test cases will be found in the src/test/resources directory. Likewise, all the resources required for deployment will be found in the src/main/resources directory. The "convention over configuration" paradigm once again reduces the number of decisions that the developer needs to make. Maven-based testing will work without the need for any further configuration as long as this directory structure is followed. If you do not already have this directory structure, you will need to create it manually by right-clicking on the required folder:
After adding the directory structure, we can create the individual files as shown:
The jdbc.properties file
Right-click on the test/resources folder and navigate to New | Other. The New File wizard will open where you can select Other from Categories and Properties File as shown:
Click on the Finish button to create the jdbc.properties file. NetBeans will then open the file in the editor, where you can add the following code:
The jdbc.properties file is used to define the database connection details that will be used by Spring to configure our DAO layer for unit testing. Enterprise projects usually have one or more dedicated test databases that are prefilled with appropriate data for all testing scenarios. We will use the database that was generated and populated in Chapter 2, The Task Time Tracker Database.
The logback.xml file
Create this file by using the New File wizard's XML category as shown:
After creating the logback.xml file, you can enter the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds" >
    <contextName>TaskTimeTracker</contextName>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="com.gieman.tttracker" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>
    <logger name="com.gieman.tttracker.dao" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>
    <logger name="com.gieman.tttracker.domain" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>
    <logger name="com.gieman.tttracker.service" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>
    <logger name="com.gieman.tttracker.web" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>
    <root level="INFO">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
For those who are familiar with log4j, the syntax of the logback logger definitions is very similar. We have set the root log level to INFO, which will cover all the loggers that are not explicitly defined (note that the default level is DEBUG, but this will usually result in extensive logging at the root level). Each individual logger, with the name matching a com.gieman.tttracker package, is set to log level DEBUG. This configuration gives us considerable flexibility and control over package-level logging properties. In production we would normally deploy a WARN level for all loggers to minimize logging. If an issue is encountered, we would then selectively enable logging in different packages to help identify any problems. Unlike log4j, this dynamic reloading of logger properties can be done on the fly thanks to logback's scan="true" scanPeriod="30 seconds" option in the <configuration> node. More information about the logback configuration can be found here: http://logback.qos.ch/manual/configuration.html.
The test-persistence.xml file
Follow the New File steps outlined in the previous section to create the test-persistence.xml file. Enter the following persistence context definition:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="tttPU" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <class>com.gieman.tttracker.domain.Company</class>
        <class>com.gieman.tttracker.domain.Project</class>
        <class>com.gieman.tttracker.domain.Task</class>
        <class>com.gieman.tttracker.domain.TaskLog</class>
        <class>com.gieman.tttracker.domain.User</class>
        <exclude-unlisted-classes>true</exclude-unlisted-classes>
        <properties>
            <property name="eclipselink.logging.level" value="WARNING"/>
        </properties>
    </persistence-unit>
</persistence>
This persistence unit definition is slightly different from the one created in Chapter 3, Reverse Engineering the Domain Layer with JPA:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="tttPU" transaction-type="JTA">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <jta-data-source>jdbc/tasktimetracker</jta-data-source>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties/>
    </persistence-unit>
</persistence>
Note that the testing persistence-unit transaction type is RESOURCE_LOCAL rather than JTA. Our testing environment uses a local (Spring-managed) transaction manager rather than the one provided by our GlassFish server container (which is JTA). In both cases, the tttPU persistence unit name matches the @PersistenceContext unitName annotation of the EntityManager field in the GenericDaoImpl:
@PersistenceContext(unitName = "tttPU")
protected EntityManager em;
The second difference is the way the classes are discovered. During testing, our domain entities are explicitly listed and we exclude any classes that are not defined. This simplifies processing and ensures that only the required entities are loaded for testing, without scanning the classpath. This is an important point for Windows users; some Windows versions limit the length of the command-line statement, and therefore the length of the classpath argument. With classpath scanning, the loading of domain entities for testing may not work, resulting in strange errors such as:
org.springframework.dao.InvalidDataAccessApiUsageException: Object: com.tttracker.domain.Company[ idCompany=null ] is not a known entity type.; nested exception is java.lang.IllegalArgumentException: Object: com.tttracker.domain.Company[ idCompany=null ] is not a known entity type.
Always ensure that your testing persistence XML definitions include all domain classes in your application.
The testingContext.xml configuration file completely defines the Spring environment required for testing the DAO layer. The full file listing is:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

    <bean id="propertyConfigurer"
        class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
        p:location="classpath:jdbc.properties" />

    <bean id="tttDataSource"
        class="org.springframework.jdbc.datasource.DriverManagerDataSource"
        p:driverClassName="${jdbc.driverClassName}"
        p:url="${jdbc.url}"
        p:username="${jdbc.username}"
        p:password="${jdbc.password}"/>

    <bean id="loadTimeWeaver"
        class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />

    <bean id="jpaVendorAdapter"
        class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter"
        p:showSql="true"
        p:databasePlatform="org.eclipse.persistence.platform.database.MySQLPlatform" />

    <bean id="entityManagerFactory"
        class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
        p:dataSource-ref="tttDataSource"
        p:jpaVendorAdapter-ref="jpaVendorAdapter"
        p:persistenceXmlLocation="test-persistence.xml" />

    <!-- Transaction manager for a single JPA EntityManagerFactory (alternative to JTA) -->
    <bean id="transactionManager"
        class="org.springframework.orm.jpa.JpaTransactionManager"
        p:dataSource-ref="tttDataSource"
        p:entityManagerFactory-ref="entityManagerFactory"/>

    <!-- checks for annotated configured beans -->
    <context:annotation-config/>

    <!-- Scan for Repository/Service annotations -->
    <context:component-scan base-package="com.gieman.tttracker.dao" />

    <!-- enable the configuration of transactional behavior based on annotations -->
    <tx:annotation-driven />
</beans>
The list of valid properties for the different namespaces is very useful when you are new to Spring configuration.
The ${} syntax can then be used anywhere in the testingContext.xml file to replace the token with the required jdbc property.
The placeholders are automatically set with the properties loaded from the jdbc.properties file:
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/task_time_tracker
jdbc.username=root
jdbc.password=adminadmin
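Conceptually, the property configurer does little more than load a standard java.util.Properties object and substitute each ${...} token it finds in the bean definitions. A minimal sketch of that substitution step (illustrative only; Spring's real PropertyPlaceholderConfigurer adds defaults, nesting, and fail-fast behavior):

```java
import java.util.Properties;

public class PlaceholderResolver {

    // Replace each ${key} token in the input with the matching property value.
    // Unresolved keys are left untouched so problems are visible.
    public static String resolve(String text, Properties props) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < text.length()) {
            int start = text.indexOf("${", i);
            if (start < 0) {                       // no more tokens
                out.append(text.substring(i));
                break;
            }
            int end = text.indexOf('}', start);
            if (end < 0) {                         // unterminated token
                out.append(text.substring(i));
                break;
            }
            out.append(text, i, start);            // literal text before token
            String key = text.substring(start + 2, end);
            String value = props.getProperty(key);
            out.append(value != null ? value : text.substring(start, end + 1));
            i = end + 1;
        }
        return out.toString();
    }
}
```

For example, resolving `p:url="${jdbc.url}"` against the properties above yields the fully expanded MySQL URL, which is exactly what Spring injects into the tttDataSource bean.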
This very simple Spring configuration snippet replaces many lines of equivalent Java code that we would otherwise need to instantiate the DataSource ourselves. Note how simple it would be to change any of the database properties for different testing scenarios, or even to change the database server from MySQL to Oracle. This flexibility makes the Spring IoC container very powerful for enterprise use. You should note that org.springframework.jdbc.datasource.DriverManagerDataSource should only be used for testing purposes and is not for use in a production environment. The GlassFish server will provide a connection-pooled DataSource for production use.
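The reason DriverManagerDataSource is test-only is that it hands out a brand-new physical connection on every request rather than drawing from a pool. A simplified illustration of this behavior (an assumption-laden sketch, not Spring's actual source):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Simplified stand-in for what DriverManagerDataSource does: no pooling,
// a fresh connection per call. Class name is illustrative only.
public class SimpleDriverDataSource {

    private final String url;
    private final String username;
    private final String password;

    public SimpleDriverDataSource(String url, String username, String password) {
        this.url = url;
        this.username = username;
        this.password = password;
    }

    public String getUrl() {
        return url;
    }

    // Every call opens a new physical connection: acceptable for unit tests,
    // far too expensive under production load, where a pooled DataSource
    // (such as the one GlassFish provides) should be used instead.
    public Connection getConnection() throws SQLException {
        return DriverManager.getConnection(url, username, password);
    }
}
```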
Spring provides a large number of database and JPA implementations as can be seen when using autocomplete in NetBeans (the Ctrl + Space bar combination in NetBeans triggers the autocomplete options):
Helper beans are used to define implementation-specific properties. It is very easy to swap implementation strategies for different enterprise environments. For example, developers may use MySQL databases running locally in their own environment for development purposes. Production enterprise servers may use an Oracle database running on a different physical server. Only very minor changes are required to the Spring XML configuration file to implement such differences for the application environment.
This definition references the tttDataSource and jpaVendorAdapter beans that are already configured, as well as the test-persistence.xml persistence context definition file. Once again, Spring does a lot of work in the background by creating and configuring the EntityManager instance and making it available for use in our code.
This bean wires together the tttDataSource and entityManagerFactory instances to enable transactional behavior in our application. This behavior is applied to all classes with @Transactional annotations; in our current situation, this applies to all the DAO objects. Spring scans for this annotation and applies a transactional wrapper to each annotated method when the following line is included in the configuration file:
<tx:annotation-driven />
Which classes are scanned for the @Transactional annotation? The following line specifies that Spring should scan the com.gieman.tttracker.dao package:
<context:component-scan base-package="com.gieman.tttracker.dao"/>
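Conceptually, the "transactional wrapper" that Spring applies is a proxy that brackets each annotated method with begin/commit, rolling back on failure. A stripped-down illustration using a plain JDK dynamic proxy (all names here are illustrative; Spring's real machinery is considerably richer):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class TxProxyDemo {

    // Stand-in for a DAO interface; purely illustrative.
    public interface CompanyDao {
        String persist(String companyName);
    }

    // Wrap the target so every call is bracketed by begin/commit,
    // with rollback on any exception. The 'log' records what happened.
    public static CompanyDao transactional(CompanyDao target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("begin;");
            try {
                Object result = method.invoke(target, args);
                log.append("commit;");
                return result;
            } catch (Exception e) {
                log.append("rollback;");
                throw e;
            }
        };
        return (CompanyDao) Proxy.newProxyInstance(
                CompanyDao.class.getClassLoader(),
                new Class<?>[]{CompanyDao.class},
                handler);
    }
}
```

Calling persist on the proxied DAO produces a begin/commit pair around the real call, which is essentially what <tx:annotation-driven /> arranges for every @Transactional method.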
Autowiring beans
Autowiring is a Spring term used to automatically inject a resource into a managed bean. The following line enables autowiring in beans that have the @Autowired annotation:
<context:annotation-config/>
We do not have any autowired annotations as of yet in our code; the next section will introduce how this annotation is used.
As enterprise application developers we can and should focus most of our time and energy on core application concerns: business logic, user interfaces, requirements, testing, and, of course, our customers. Spring makes sure we can stay focused on these tasks.
            <version>2.5.0-SNAPSHOT</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.persistence</groupId>
            <artifactId>org.eclipse.persistence.jpa.modelgen.processor</artifactId>
            <version>2.5.0-SNAPSHOT</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-web-api</artifactId>
            <version>7.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.26</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-orm</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-instrument</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                    <compilerArguments>
                        <endorseddirs>${endorsed.dir}</endorseddirs>
                    </compilerArguments>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <warName>${project.build.finalName}</warName>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.6</version>
                <executions>
                    <execution>
                        <id>copy-endorsed</id>
                        <phase>validate</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${endorsed.dir}</outputDirectory>
                            <silent>true</silent>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>javax</groupId>
                                    <artifactId>javaee-endorsed-api</artifactId>
                                    <version>7.0</version>
                                    <type>jar</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                    <execution>
                        <id>copy-all-dependencies</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/lib</outputDirectory>
                            <includeScope>compile</includeScope>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.14.1</version>
                <configuration>
                    <skipTests>false</skipTests>
                    <includes>
                        <include>**/dao/*Test.java</include>
                    </includes>
                    <argLine>-javaagent:target/lib/spring-instrument-${spring.version}.jar</argLine>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <repositories>
        <repository>
            <url>http://download.eclipse.org/rt/eclipselink/maven.repo/</url>
            <id>eclipselink</id>
            <layout>default</layout>
            <name>Repository for library EclipseLink (JPA 2.1)</name>
        </repository>
    </repositories>
</project>
The first two changes add the mysql-connector-java and junit dependencies. Without these, we will not be able to connect to the database or write test cases. These dependencies will download the appropriate Java libraries for inclusion in our project.
The most important settings are in the Maven surefire plugin, which performs the actual work. Adding the maven-surefire-plugin allows test case execution based on the contents of the src/test directory structure. This clearly separates the testing classes from our application classes. The main configuration properties for this plugin are:

<skipTests>: This property can be true (to disable testing) or false.

<includes>: This property contains the list of file sets to include during testing. The setting <include>**/dao/*Test.java</include> specifies that all the classes in any dao subdirectory with a filename ending in Test.java should be loaded and included in the testing process. You may specify any number of file sets.

<argLine>: The setting -javaagent:target/lib/spring-instrument-${spring.version}.jar configures the Java agent for the testing JVM and is required by Spring for the load-time weaving of classes, a discussion of which is beyond the scope of this text.

Now that we have configured the Spring and Maven testing environments, we can start writing test cases.
    @Autowired(required = true)
    protected CompanyDao companyDao;

    @Autowired(required = true)
    protected ProjectDao projectDao;

    @Autowired(required = true)
    protected TaskDao taskDao;

    @Autowired(required = true)
    protected UserDao userDao;

    @Autowired(required = true)
    protected TaskLogDao taskLogDao;
}
The AbstractDaoForTesting class is marked abstract so that it cannot be instantiated directly. It provides member variables that are accessible to all subclasses, thus removing the need to replicate code in the descendants. As a result, each subclass will have access to the DAO instances as well as the SLF4J logger. There are two new Spring annotations:
@ContextConfiguration: This annotation defines the Spring application context used to load the bean container. The testingContext.xml file has been covered in detail in the previous sections.

@Autowired: This annotation indicates that a Spring bean with a matching type should be dependency injected into the class. For example, the CompanyDao companyDao definition will result in Spring querying the container for an object of type CompanyDao. There is only one object with this type: the CompanyDaoImpl class that was discovered and configured by Spring when scanning the com.gieman.tttracker.dao package via the <context:component-scan base-package="com.gieman.tttracker.dao"/> entry in the testingContext.xml file.
The final important thing to notice is that the AbstractDaoForTesting class extends the Spring AbstractTransactionalJUnit4SpringContextTests class. Apart from being a very long class name, this class provides transparent transactional rollbacks at the end of each test method. This means the database state at the end of any DAO testing operations (including any insert, update, or delete) will be the same as at the start of testing. If this behavior is not required, you should extend AbstractJUnit4SpringContextTests instead. In this case any testing database operations can be examined and confirmed after the tests have been run. It is also possible to mark a single method with @Rollback(false) when using AbstractTransactionalJUnit4SpringContextTests to commit changes if required. Let's now write our first test case for the CompanyDao operation.
public class CompanyDaoTest extends AbstractDaoForTesting {

    public CompanyDaoTest(){
    }

    @Test
    public void testFind() throws Exception {
        logger.debug("\nSTARTED testFind()\n");
        List<Company> allItems = companyDao.findAll();

        assertTrue(allItems.size() > 0);

        // get the first item in the list
        Company c1 = allItems.get(0);

        int id = c1.getId();

        Company c2 = companyDao.find(id);

        assertTrue(c1.equals(c2));

        logger.debug("\nFINISHED testFind()\n");
    }

    @Test
    public void testFindAll() throws Exception {
        logger.debug("\nSTARTED testFindAll()\n");
        int rowCount = countRowsInTable("ttt_company");

        if(rowCount > 0){
            List<Company> allItems = companyDao.findAll();
            assertTrue("Company.findAll list not equal to row count of table ttt_company",
                rowCount == allItems.size());
        } else {
            throw new IllegalStateException("INVALID TESTING SCENARIO: Company table is empty");
        }
        logger.debug("\nFINISHED testFindAll()\n");
    }

    @Test
    public void testPersist() throws Exception {
        logger.debug("\nSTARTED testPersist()\n");
        Company c = new Company();
        final String NEW_NAME = "Persist Test Company name";
        c.setCompanyName(NEW_NAME);
        companyDao.persist(c);

        assertTrue(c.getId() != null);
        assertTrue(c.getCompanyName().equals(NEW_NAME));

        logger.debug("\nFINISHED testPersist()\n");
    }

    @Test
    public void testMerge() throws Exception {
        logger.debug("\nSTARTED testMerge()\n");
        final String NEW_NAME = "Merge Test Company New Name";
        Company c = companyDao.findAll().get(0);
        c.setCompanyName(NEW_NAME);
        c = companyDao.merge(c);

        assertTrue(c.getCompanyName().equals(NEW_NAME));

        logger.debug("\nFINISHED testMerge()\n");
    }

    @Test
    public void testRemove() throws Exception {
        logger.debug("\nSTARTED testRemove()\n");
        Company c = companyDao.findAll().get(0);
        companyDao.remove(c);

        List<Company> allItems = companyDao.findAll();

        assertTrue("Deleted company may not be in findAll List",
            !allItems.contains(c));

        logger.debug("\nFINISHED testRemove()\n");
    }
}
It is also possible to only run the testing phase of the project by navigating to Run | Test Project (task-time-tracker):
The results of the testing process can now be examined in the Output - task-time-tracker panel. Note that you may need to dock the output panel to the bottom of the IDE if it is minimized, as shown in the following screenshot (the minimized panel is usually in the bottom-left corner of the NetBeans IDE). The [surefire:test] plugin output is displayed at the start of the testing process. There are many lines of output for configuring Spring, connecting to the database, and loading the persistence context:
We will examine the key testing output in detail soon. Scroll through the output until you reach the end of the test section:
This will execute the file's test cases, producing the same testing output as shown previously, and present you with the results in the Test Results panel. This panel should appear under the file editor but may not be docked (it may be floating at the bottom of the NetBeans IDE; you can change the position and docking as required). The individual file testing results can then be examined:
Single test file execution is a practical and quick way of debugging and developing code. We will continue to execute and examine single files during the rest of the chapter. Let's now examine the results of each test case in detail.
In all of the following testing outputs, the SLF4J-specific messages have been removed. These include timestamps, threads, and session information. We will focus only on the generated SQL.
A merge call is used to update a persistent entity. The testMerge method is very simple:
final String NEW_NAME = "Merge Test Company New Name";
Company c = companyDao.findAll().get(0);
c.setCompanyName(NEW_NAME);
c = companyDao.merge(c);

assertTrue(c.getCompanyName().equals(NEW_NAME));
We find the first Company entity (the first item in the list returned by findAll) and then update the name of the company to the NEW_NAME value. The companyDao.merge call then updates the Company entity state in the persistence context. This is verified using the assertTrue() test. Note that the testing output contains only one SQL statement:
SELECT id_company, company_name FROM ttt_company ORDER BY company_name ASC
This output corresponds to the findAll method call. Note that there is no SQL update statement executed! This may seem strange because the entity manager's merge call should result in an update statement being issued against the database. However, the JPA implementation is not required to execute such statements immediately and may cache statements when possible, for performance and optimization purposes. The cached (or queued) statements are then executed only when an explicit commit is called. In our example, Spring executes a rollback immediately after the testMerge method returns (remember, we are running transactional test cases thanks to our AbstractTransactionalJUnit4SpringContextTests extension), and hence the persistence context never needs to execute the update statement.
Consider what happens if we add an explicit flush to the merge method in the GenericDaoImpl class:
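The book's code listing at this point is not reproduced in this extract. Reconstructed from the surrounding discussion, the modified merge method would look roughly like the following (the method body is an assumption based on the text, not the verbatim listing):

```java
// Reconstructed sketch of GenericDaoImpl.merge with an explicit flush.
// The 'em' field is the @PersistenceContext-injected EntityManager shown earlier.
@Override
public T merge(T o) {
    T obj = em.merge(o);
    em.flush(); // forces the pending UPDATE to be issued immediately
    return obj;
}
```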
The em.flush() method results in an immediate update statement being executed; the entity manager is flushed of all pending changes. Changing this code in the GenericDaoImpl class and executing the test case again will result in the following testing output:
SELECT id_company, company_name FROM ttt_company ORDER BY company_name ASC

UPDATE ttt_company SET company_name = ? WHERE (id_company = ?)
    bind => [Merge Test Company New Name, 2]
The update statement now appears as expected. If we now check the database directly after executing the test case, we find:
As expected, Spring has rolled back the database at the end of the testMerge method call, and the company name of the first record has not changed.
In enterprise applications, it is recommended not to call em.flush() explicitly and to allow the JPA implementation to optimize statements according to their transactional behavior. There may be situations, however, where an immediate flush is required, but these are rare.
Even though the testMerge method uses the findAll method to retrieve the rst item in the list, we should always include a separate findAll test method to compare the size of the result set with the database table. This is easy when using the Spring helper method countRowsInTable:
int rowCount = countRowsInTable("ttt_company");
We can then compare the size of the findAll result list with rowCount using the assertTrue statement:
assertTrue("Company.findAll list not equal to row count of table ttt_company",
    rowCount == allItems.size());
Note how the assertTrue statement is used; the message is displayed if the assertion is false. We can test the statement by slightly modifying the assertion so that it fails:
assertTrue("Company.findAll list not equal to row count of table ttt_company",
    rowCount+1 == allItems.size());
It will now fail and result in the following output when the test case is executed:
This may seem a bit surprising for those new to JPA. The SELECT statement is executed from the code:
List<Company> allItems = companyDao.findAll();
But where is the expected SELECT statement when calling the find method using the id attribute?
int id = c1.getId(); // find ID of first item in list
Company c2 = companyDao.find(id);
JPA does not need to execute a SELECT statement against the database as the entity with the required ID has already been loaded in the persistence context. There will be three entities loaded as a result of the findAll method with IDs 1, 2, and 3. When asked to find the entity using the ID of the first item in the list, JPA will return the entity it has already loaded in the persistence context with the matching ID, avoiding the need to execute a database select statement. This is often a trap in understanding the behavior of JPA-managed applications. When an entity is loaded into the persistence context it will remain there until it expires. The definition of what constitutes "expires" will depend on the implementation and caching properties. It is possible that small sets of data will never expire; in our Company example with only a few records, this will most likely be the case. Performing an update statement directly on the underlying table, for example, changing the company name of the first record, may never be reflected in the JPA persistence context as the persistence context entity will never be refreshed.
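The behavior can be illustrated with a toy first-level cache in plain Java (an illustration only, not real JPA internals): once an entity is cached, a find by ID never touches the "table" again, so a direct table update becomes invisible to the cached entity.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a persistence context's first-level cache.
// find() only hits the "database" on a cache miss.
public class PersistenceContextDemo {
    static final Map<Integer, String> database = new HashMap<>(); // the "table"
    static final Map<Integer, String> context = new HashMap<>();  // managed entities
    static int selectCount = 0;

    static String find(int id) {
        return context.computeIfAbsent(id, key -> {
            selectCount++; // a real SELECT would execute here
            return database.get(key);
        });
    }

    public static void main(String[] args) {
        database.put(1, "PACKT Publishing");
        find(1); // first access: "SELECT" executed
        find(1); // already managed: no "SELECT"
        database.put(1, "Renamed Co"); // direct update on the underlying table
        System.out.println("selects=" + selectCount);
        System.out.println("entity=" + find(1)); // still the stale cached value
    }
}
```

The second find and the post-update find both return the cached instance; only one "SELECT" ever runs, mirroring why a direct table update may never be seen by the persistence context.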
If an enterprise application expects data modification from multiple sources (for example, through stored procedures or web service calls via a different entity manager), a caching strategy to expire stale entities will be required. JPA does not automatically refresh the entity state from the database and will assume that the persistence context is the only mechanism for managing persistent data. EclipseLink provides several caching annotations to solve this problem. An excellent guide can be found here: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching.
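As a sketch of what such a strategy might look like (based on the EclipseLink documentation linked above; the 60-second expiry value is an arbitrary example, not a recommendation from this book):

```java
// Hedged sketch: EclipseLink's @Cache annotation can expire cached
// entities so that external updates eventually become visible.
@Entity
@Table(name = "ttt_company")
@Cache(expiry = 60000) // invalidate cached Company instances after 60 seconds
public class Company implements Serializable {
    // ...
}
```

The appropriate expiry (or cache type) depends entirely on how often external systems modify the underlying tables.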
You will notice the em.flush() method in GenericDaoImpl after the em.persist() method. Without this flush to the database, we cannot guarantee that a valid primary key has been set on the new Company entity. The output for this test case is:
STARTED testPersist()
INSERT INTO ttt_company (company_name) VALUES (?)
	bind => [Persist Test Company name]
SELECT LAST_INSERT_ID()
The com.gieman.tttracker.domain.Company record with ID=4 has been inserted
FINISHED testPersist()
Note that the logging outputs the newly generated primary key value of 4. This value is retrieved when JPA queries MySQL using the SELECT LAST_INSERT_ID() statement. In fact, removing the em.flush() method from GenericDaoImpl and executing the test case would result in the following output:
STARTED testPersist()
The com.gieman.tttracker.domain.Company record with ID=null has been inserted
The assertion assertTrue(c.getId() != null) will fail and we will not even display the FINISHED testPersist() message. Our test case fails before the debug message is reached.
Once again we see the JPA optimization in action. Without the em.flush() method, JPA will wait until a transaction is committed in order to execute any changes in the database. As a result, the primary key may not be set as expected for any subsequent code using the newly created entity object within the same transaction. This is another trap for the unwary developer, and the persist method identifies the only situation where an entity manager flush() to the database may be required.
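The deferred execution can be sketched with a toy entity manager in plain Java (an illustration only, not real JPA): inserts are queued by persist and only executed, with keys assigned, when flush runs.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of deferred INSERT execution. The "entity manager" queues
// inserts; the generated primary key is assigned only on flush().
class ToyEntity {
    Integer id;  // primary key, null until the INSERT actually executes
    String name;
    ToyEntity(String name) { this.name = name; }
}

class ToyEntityManager {
    private final Queue<ToyEntity> pendingInserts = new ArrayDeque<>();
    private int nextId = 1; // stands in for MySQL's AUTO_INCREMENT counter

    void persist(ToyEntity e) {
        pendingInserts.add(e); // queued, NOT executed yet
    }

    void flush() {
        ToyEntity e;
        while ((e = pendingInserts.poll()) != null) {
            e.id = nextId++; // the SELECT LAST_INSERT_ID() equivalent
        }
    }
}

public class PersistFlushDemo {
    public static void main(String[] args) {
        ToyEntityManager em = new ToyEntityManager();
        ToyEntity company = new ToyEntity("Persist Test Company name");

        em.persist(company);
        System.out.println("after persist, id=" + company.id); // still null

        em.flush();
        System.out.println("after flush, id=" + company.id);   // key assigned
    }
}
```

Until flush runs, the ID is null, which is exactly why the assertTrue(c.getId() != null) assertion fails when em.flush() is removed from GenericDaoImpl.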
DELETE FROM ttt_project WHERE (id_project = ?)
	bind => [5]
DELETE FROM ttt_company WHERE (id_company = ?)
	bind => [2]
SELECT id_company, company_name FROM ttt_company ORDER BY company_name ASC
FINISHED testRemove()
The first SELECT statement is executed as a result of finding the first company in the list:
Company c = companyDao.findAll().get(0);
Why does deleting a company result in a SELECT statement on the ttt_project table? The reason is that each Company entity may have one or more related Project entities as defined in the Company class definition:
@OneToMany(cascade = CascadeType.ALL, mappedBy = "company")
private List<Project> projects;
JPA understands that deleting a Company requires a check against the ttt_project table to see if there are any dependent Projects. In the @OneToMany annotation, the cascade = CascadeType.ALL property defines the behavior if a Company is deleted: the change should be cascaded to any dependent entities. In this example, deleting a company record will require the deletion of all related project records. Each Project entity in turn owns a collection of Task entities as defined in the Project class definition:
@OneToMany(cascade = CascadeType.ALL, mappedBy = "project")
private List<Task> tasks;
The result of removing a Company entity has far-reaching consequences as all related Projects and their related Tasks are deleted from the underlying tables. The testing output shows a cascade of DELETE statements, ending with the deletion of the company itself. This may not be suitable behavior for enterprise applications; in fact, such a cascading of deletions is usually never implemented without extensive checks to ensure data integrity. A simple change in the cascade annotation in the Company class will ensure that the deletion is not propagated:
@OneToMany(cascade = {CascadeType.MERGE, CascadeType.PERSIST}, mappedBy = "company")
private List<Project> projects;
Now only the MERGE and PERSIST operations on the Company entity will be cascaded to the related Project entities. Running the test case again after making this change will result in:
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Cannot delete or update a parent row: a foreign key constraint fails (`task_time_tracker`.`ttt_project`, CONSTRAINT `ttt_project_ibfk_1` FOREIGN KEY (`id_company`) REFERENCES `ttt_company` (`id_company`))
As the cascade type for REMOVE was not included, JPA does not check for related rows in the ttt_project table and simply attempts to execute the DELETE statement on the ttt_company table. This will fail, as there are related records on the ttt_project table. It will now only be possible to remove a Company entity if there are no related Project entities (the projects field is an empty list).
Changing the CascadeType as outlined in this section adds business logic to the DAO layer. You will no longer be able to perform certain actions through the persistence context. There may, however, be a legitimate situation where you do want a cascading delete of a Company entity, and this will no longer be possible. CascadeType.ALL is the most flexible option, allowing all possible scenarios. Business logic such as deletion strategies should be implemented in the service layer, which is the subject of the next chapter.
We will continue to use the cascade = CascadeType.ALL property and allow JPA-managed deletions to propagate. The business logic to restrict these actions will be implemented in the service layer.
@Test
public void testManyToOne() throws Exception {
    logger.debug("\nSTARTED testManyToOne()\n");
    Company c = companyDao.findAll().get(0);
    Company c2 = companyDao.findAll().get(1);
    Project p = c.getProjects().get(0);
    p.setCompany(c2);
    p = projectDao.merge(p);

    assertTrue("Original company still has project in its collection!",
        !c.getProjects().contains(p));
    assertTrue("Newly assigned company does not have project in its collection",
        c2.getProjects().contains(p));
    logger.debug("\nFINISHED testManyToOne()\n");
}

@Test
public void testFindByUsernamePassword() throws Exception {
    logger.debug("\nSTARTED testFindByUsernamePassword()\n");

    // find by username/password combination
    User user = userDao.findByUsernamePassword("bjones", "admin");
    assertTrue("Unable to find valid user with correct username/password combination",
        user != null);

    user = userDao.findByUsernamePassword("bjones", "ADMIN");
    assertTrue("User found with invalid password", user == null);
    logger.debug("\nFINISHED testFindByUsernamePassword()\n");
}
}
The first failure arises from the userDao.findByUsernamePassword statement, which uses the uppercase password:
user = userDao.findByUsernamePassword("bjones", "ADMIN");
Why was the user found with an obviously incorrect password? The reason is very simple and is a trap for the unwary developer. Most databases, by default, are case insensitive when matching text fields. In this situation the uppercase ADMIN will match the lowercase admin in the password field. Not exactly what we want when checking passwords! The database term that describes this behavior is collation; we need to modify the password column to use a case-sensitive collation. This can be achieved in MySQL with the following SQL command:
ALTER TABLE ttt_user MODIFY password VARCHAR(100) COLLATE latin1_general_cs;
Other databases will have similar semantics. This will change the collation on the password field to be case sensitive (note the _cs appended in latin1_general_cs). Running the test case will now result in expected behavior for case-sensitive password checking:
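The two collation behaviors map neatly onto familiar Java string comparisons, which can help make the distinction concrete (an analogy only, not how MySQL implements collation internally):

```java
// A case-insensitive collation matches text the way equalsIgnoreCase()
// does; a case-sensitive (_cs) collation behaves like equals().
public class CollationDemo {
    public static void main(String[] args) {
        String stored = "admin";   // value in the password column
        String supplied = "ADMIN"; // value supplied by the failing test case

        // default case-insensitive (_ci) matching: ADMIN matches admin
        System.out.println("ci match: " + stored.equalsIgnoreCase(supplied));

        // latin1_general_cs-style case-sensitive matching: no match
        System.out.println("cs match: " + stored.equals(supplied));
    }
}
```

With the case-insensitive comparison the wrong password "matches", exactly as the default collation allowed in the failing test.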
The testManyToOne failure is another interesting case. In this test case, we are reassigning the project to a different Company. The p.setCompany(c2); line will change the assigned company to the second one in the list. We would expect that after calling the merge method on the project, the collection of projects in the c2 company would contain the newly reassigned project. In other words, the following code line should equate to true:
c2.getProjects().contains(p)
Likewise, the old company should no longer contain the newly reassigned project and hence should be false:
c.getProjects().contains(p)
This is obviously not the case and identifies a trap for developers new to JPA. Although the persistence context understands the relationship between entities using @OneToMany and @ManyToOne, the Java representation of the relationship needs to be handled by the developer when collections are concerned. The simple changes required are as follows:
p.setCompany(c2);
p = projectDao.merge(p);
c.getProjects().remove(p);
c2.getProjects().add(p);
When the projectDao.merge(p) line is executed, the persistence context has no way of knowing the original parent company (if there is one at all; this may be a newly inserted project). The original Company entity in the persistence context still has a collection of projects assigned. This collection will never be updated during the lifetime of the Company entity within the persistence context. The two additional lines of code remove the project from the original company's project list and add it to the new company's list, ensuring that the persistence context entities are updated to the correct state.
Exercises
1. Add test assertions to the CompanyDaoTest.find() method to test for the following scenarios:
   - Attempting to find a company with a null primary key
   - Attempting to find a company with a negative primary key
   What do you consider to be the expected results?
2. Create the missing test case files for the ProjectDao, TaskDao, UserDao, and TaskLogDao implementations.
3. Create a test case to determine if removing (deleting) a project will automatically remove the project from the owning company's project collection.
Summary
We have once again covered a lot of territory. Unit testing is a critical part of enterprise application development, and the combination of NetBeans, Maven, JUnit, and Spring provides us with a solid platform to launch both automated and single file test cases. Writing comprehensive test cases is an art form that is always appreciated and valued in any high-quality development team; never underestimate the confidence gained from working with well-tested code with a solid suite of test cases!

In the next chapter, we will examine the role of the service layer in enterprise application development. Our 3T business logic will then be implemented using the Data Transfer Objects (DTO) design pattern.