Creating Advanced Mappings and Custom Data Types - Persisting Data with JPA and Hibernate ORM - PROFESSIONAL JAVA FOR WEB APPLICATIONS (2014)


Part III Persisting Data with JPA and Hibernate ORM

Chapter 24 Creating Advanced Mappings and Custom Data Types

IN THIS CHAPTER

· Why convert nonstandard data types?

· How to embed POJOs inside entities

· How to define relationships between entities

· Using revisions and timestamps to version entities

· How to define common entity ancestors

· How to map Collections and Maps of basic and embedded values

· Using multiple tables to store entities

· Constructing programmatic triggers

· Using load time weaving to lazy load simple properties

WROX.COM CODE DOWNLOADS FOR THIS CHAPTER

You can find the wrox.com code downloads for this chapter at http://www.wrox.com/go/projavaforwebapps on the Download Code tab. The code for this chapter is divided into the following major examples:

· Advanced-Mappings Project

· Customer-Support-v18 Project

NEW MAVEN DEPENDENCIES FOR THIS CHAPTER

There are no new Maven dependencies for this chapter. Continue to use the Maven dependencies introduced in all previous chapters.

WHAT’S LEFT?

So far you’ve done some cool things with JPA. From simple lookups to complex queries to advanced searching, you’ve thoroughly explored the Java Persistence API and its CRUD capabilities.

So what’s left? Well, you’ve only scratched the surface of the ways you can map objects to database tables. So far your entities have been very straightforward, containing only basic types that the JPA provider can directly and unambiguously convert to and from relational database field types. In reality, your entities will not be so trivial. You have already seen this problem with the TicketEntity and TicketCommentEntity in the Customer Support application. For example, it would be a lot easier to simply persist the Instant creation date, but instead you have to use a Timestamp in your entities. Hopefully, JPA 2.2 (or whatever comes next, which may be JPA 3.0) will support the Java 8 Date and Time types natively, but right now you need a different solution.

NOTE To help ensure the inclusion of Java 8 Date and Time support in the next version of JPA, go to https://java.net/jira/browse/JPA_SPEC-63 and vote on the feature request. You need to create a Java.net account or log in using your existing Java.net account to vote.

So as a starting point, take a look at some of the things you still need to learn to make the most out of your JPA entities:

· Convert simple types that aren’t natively supported, such as Instants, LocalDateTimes, java.net.InetAddresses, and more.

· Map POJO property types with entities in addition to basic properties. For example, you might have a PhoneNumber object that contains components of a phone number stored in multiple columns but that isn’t an entity on its own.

· Define one-to-many, many-to-one, and many-to-many relationships. A great example is retrieving the Attachments on a Ticket at the same time that you retrieve the Ticket. Of course, this relationship should be lazy so that you don’t unnecessarily load a bunch of Attachments while simply listing Tickets. Another example is retrieving a Ticket and the UserPrincipal that created the Ticket at the same time.

· Version entities, keeping track of how many times they are updated.

· Define properties common to many entities in a base class that those entities inherit from. This way, you don’t have to duplicate code for things such as IDs, auditing, and versioning.

· Store key-value pairs as a Map within entities. The values may even need to be other entities.

· Retrofit existing databases and applications. Not every application can start with a domain model and then create a suitable database. In these situations, it sometimes helps to split the data for an entity across several tables, and you need a way to get all this data into one entity.

· Define custom behavior that takes place before or after a CRUD operation on an entity.

As you can see, you’re just getting started. In this chapter, you cover all these topics, and when you’re done, you’ll have virtually unlimited ways to use JPA entities. For most of the chapter, you can follow along in the Advanced-Mappings project, available for download on the wrox.com code download site. This chapter covers the mappings in that project but does not mention the Spring Data JPA repositories, services, controllers, or (simple) user interfaces with which you are already so familiar. At any time during the chapter, you can compile and start the project from your IDE, go to http://localhost:8080/mappings, and use the links on the homepage to test out the entity mappings.

At the end of the chapter, you employ the topics you have mastered to make the task of persisting entities in the Customer Support application easier.

CONVERTING NONSTANDARD DATA TYPES

In Chapter 20, you learned about the @Basic, @Lob, @Enumerated, and @Temporal types that JPA vendors are required to support. This list is extensive, but by no means does it fulfill your every need. You have already seen that it does not include the new Java 8 Date and Time types. (But keep your eyes peeled for support for this in the next version of JPA.) Some providers do support additional types — for example, Hibernate ORM automatically supports Joda Time data types. Hibernate ORM 5.0 might even support the Java 8 Date and Time types at some point. However, relying on this support is non-portable because it’s nonstandard. If you switch providers someday, your entities may stop working.

So what can you do about this? In previous versions of JPA, you couldn’t do anything that was portable. Before JPA 2.1, there was no standard way to persist and retrieve simple types (types that aren’t themselves POJOs) that weren’t natively supported. This greatly limited your options. As a result, most of the major providers supply proprietary APIs that let you define custom data types. Using Hibernate ORM, you can implement org.hibernate.usertype.UserType or org.hibernate.usertype.CompositeUserType and then annotate a property with @org.hibernate.annotations.Type to specify the UserType or CompositeUserType implementation class responsible for that property. However, this is no more portable than relying on nonstandard basic types supported by a particular provider, and it breaks as soon as you switch providers.

This problem was finally resolved in JPA 2.1 with attribute converters, though they are not without their drawbacks.

Understanding Attribute Converters

An attribute converter is any class that implements javax.persistence.AttributeConverter. The purpose of an attribute converter is to convert entity properties between non-supported simple types and supported basic types. This works in nearly all circumstances. Using JDBC, you must eventually convert pretty much any simple type you can imagine into one of the supported basic types before you can save the value to your database. For example, if you create a custom UnsignedLong class capable of holding unsigned long integers, the only way you can get such a value into and out of the database is by calling PreparedStatement’s setBigDecimal method and ResultSet’s getBigDecimal method. This means that you can easily fulfill this need in JPA by implementing an AttributeConverter that converts between UnsignedLongs and BigDecimals.
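Because the UnsignedLong class above is hypothetical, the following sketch models only the conversion logic: the unsigned value is carried in a raw long whose bits are reinterpreted as unsigned, and the two static methods mirror what convertToDatabaseColumn and convertToEntityAttribute would do in a real AttributeConverter&lt;UnsignedLong, BigDecimal&gt;. The class and method names are illustrative, not part of the chapter’s project.

```java
import java.math.BigDecimal;

// Hypothetical sketch of the conversion logic an
// AttributeConverter<UnsignedLong, BigDecimal> would delegate to.
// The unsigned value is modeled here as a raw long reinterpreted as unsigned.
public class UnsignedLongCodec
{
    // convertToDatabaseColumn equivalent: widen the unsigned bits into a BigDecimal
    public static BigDecimal toColumn(long unsignedBits)
    {
        return new BigDecimal(Long.toUnsignedString(unsignedBits));
    }

    // convertToEntityAttribute equivalent: narrow the BigDecimal back to unsigned bits
    public static long toEntity(BigDecimal column)
    {
        return Long.parseUnsignedLong(column.toBigIntegerExact().toString());
    }

    public static void main(String[] args)
    {
        long max = -1L; // all 64 bits set = 18446744073709551615 unsigned
        BigDecimal column = toColumn(max);
        System.out.println(column);                  // prints 18446744073709551615
        System.out.println(toEntity(column) == max); // prints true
    }
}
```

A real converter would wrap this logic in the two AttributeConverter methods and be annotated with @Converter.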

The special case, ironically, is converting types that involve dates and times. The setDate, setTime, setTimestamp, getDate, getTime, and getTimestamp methods in JDBC deal with java.sql.Dates, Times, and Timestamps. As of JDBC 4.1, you must use these types for setting and retrieving dates and times in a database. JDBC 4.2, you’ll recall, adds methods that support more types than this. If you have direct access to the JDBC API and a JDBC 4.2 driver, the following code is the correct way to persist Instant, LocalDateTime, LocalDate, LocalTime, OffsetDateTime, OffsetTime, and ZonedDateTime properties:

statement.setObject(1, instant, JDBCType.TIMESTAMP);
statement.setObject(2, localDateTime, JDBCType.TIMESTAMP);
statement.setObject(3, localDate, JDBCType.DATE);
statement.setObject(4, localTime, JDBCType.TIME);
statement.setObject(5, offsetDateTime, JDBCType.TIMESTAMP_WITH_TIMEZONE);
statement.setObject(6, offsetTime, JDBCType.TIME_WITH_TIMEZONE);
statement.setObject(7, zonedDateTime, JDBCType.TIMESTAMP_WITH_TIMEZONE);

Likewise, you would retrieve these value types with code similar to the following:

instant = resultSet.getObject("instant", Instant.class);
localDateTime = resultSet.getObject("localDateTime", LocalDateTime.class);
localDate = resultSet.getObject("localDate", LocalDate.class);
localTime = resultSet.getObject("localTime", LocalTime.class);
offsetDateTime = resultSet.getObject("offsetDateTime", OffsetDateTime.class);
offsetTime = resultSet.getObject("offsetTime", OffsetTime.class);
zonedDateTime = resultSet.getObject("zonedDateTime", ZonedDateTime.class);

However, an AttributeConverter does not have access to the PreparedStatement and ResultSet objects. It can convert only between the custom type and a target type supported by JPA. Therefore, you must either write AttributeConverters that convert between these types and java.sql.Date, Time, and Timestamp, or you must still use proprietary vendor APIs such as Hibernate’s UserType. You should recall from Chapter 20 that it could be years before all the major relational database vendors provide JDBC 4.2 drivers. Without support for these new methods, resorting to UserType is futile. So your best bet is to stick with AttributeConverters.
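As a concrete illustration of such a converter, here is a minimal sketch pairing LocalDateTime with java.sql.Timestamp. The conversion is shown as plain static methods so the class compiles with the JDK alone; in a real converter these two bodies would become convertToDatabaseColumn and convertToEntityAttribute on a @Converter-annotated class implementing AttributeConverter&lt;LocalDateTime, Timestamp&gt;. The class name is made up for this example.

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

// Sketch of the conversion logic for a LocalDateTime <-> Timestamp converter.
public class LocalDateTimeCodec
{
    // convertToDatabaseColumn equivalent; Timestamp.valueOf preserves nanoseconds
    public static Timestamp toColumn(LocalDateTime value)
    {
        return value == null ? null : Timestamp.valueOf(value);
    }

    // convertToEntityAttribute equivalent
    public static LocalDateTime toEntity(Timestamp column)
    {
        return column == null ? null : column.toLocalDateTime();
    }

    public static void main(String[] args)
    {
        LocalDateTime original = LocalDateTime.of(2014, 3, 1, 12, 30, 15);
        // round trip through the database type is lossless
        System.out.println(toEntity(toColumn(original)).equals(original)); // prints true
        System.out.println(toColumn(null) == null);                        // prints true
    }
}
```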

Understanding the Conversion Annotations

Implementing an AttributeConverter is just the first step to creating and using the attribute converter. There are also several easily confused annotations that you must utilize to make your converter work. The first of these is @javax.persistence.Converter. A concrete class implementing AttributeConverter must either be annotated with @Converter or specified in a <converter> element in a JPA mapping file (such as orm.xml). Furthermore, if a converter you want to use is not in the root of your persistence unit, or you have <exclude-unlisted-classes> enabled, you must indicate that the converter is a managed class using <class> or <jar-file> in persistence.xml. @Converter’s autoApply attribute (which defaults to false) indicates whether the JPA provider should automatically apply the converter to matching properties. The definition of an attribute converter would therefore normally look like this:

@Converter
public class InstantConverter implements AttributeConverter<Instant, Timestamp>
{
    ...
}

If autoApply is false or omitted, you must use the similarly named @javax.persistence.Convert annotation on JPA properties to indicate which properties the converter applies to. You use @Convert’s converter attribute to specify the Class of the applicable converter. You can annotate a field (if you use field property access), accessor method (if you use method property access), or entity with @Convert. If you annotate the entity, you must also specify the attributeName attribute. The following three uses of @Convert are all equivalent.

public class MyEntity
{
    @Convert(converter = InstantConverter.class)
    private Instant dateCreated;
    ...
    public Instant getDateCreated() { ... }
    public void setDateCreated(Instant instant) { ... }
    ...
}

public class MyEntity
{
    private Instant dateCreated;
    ...
    @Convert(converter = InstantConverter.class)
    public Instant getDateCreated() { ... }
    public void setDateCreated(Instant instant) { ... }
    ...
}

@Convert(attributeName = "dateCreated", converter = InstantConverter.class)
public class MyEntity
{
    private Instant dateCreated;
    ...
    public Instant getDateCreated() { ... }
    public void setDateCreated(Instant instant) { ... }
    ...
}

If you use the latter approach, you may have multiple attributes that need converting. Though it would likely be easier just to annotate the individual properties, you can use the @javax.persistence.Converts annotation to group multiple @Convert annotations at the entity level.

@Converts({
    @Convert(attributeName = "dateCreated", converter = InstantConverter.class),
    @Convert(attributeName = "dateModified", converter = InstantConverter.class)
})
public class MyEntity
{
    private Instant dateCreated;
    private Instant dateModified;
    ...
    public Instant getDateCreated() { ... }
    public void setDateCreated(Instant instant) { ... }
    public Instant getDateModified() { ... }
    public void setDateModified(Instant instant) { ... }
    ...
}

Creating and Using Attribute Converters

In many cases your attribute converters will be very simple. The InstantConverter in the Advanced-Mappings project has only one line of code in each method.

@Converter
public class InstantConverter implements AttributeConverter<Instant, Timestamp>
{
    @Override
    public Timestamp convertToDatabaseColumn(Instant instant)
    {
        return instant == null ? null : new Timestamp(instant.toEpochMilli());
    }

    @Override
    public Instant convertToEntityAttribute(Timestamp timestamp)
    {
        return timestamp == null ? null : Instant.ofEpochMilli(timestamp.getTime());
    }
}

The User entity demonstrates use of the custom attribute converter to persist the dateJoined property without having to use a DTO and service to convert the value.

@Entity
@Table(name = "UserPrincipal")
public class User
{
    private long id;
    private Instant dateJoined;
    private String username;

    @Id
    @Column(name = "UserId")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public long getId() { ... }
    public void setId(long id) { ... }

    @Convert(converter = InstantConverter.class)
    public Instant getDateJoined() { ... }
    public void setDateJoined(Instant dateJoined) { ... }

    @Basic
    public String getUsername() { ... }
    public void setUsername(String username) { ... }
}

Finally, in RootContextConfiguration the com.wrox.site.converters package is added to the LocalContainerEntityManagerFactoryBean’s packagesToScan property. This ensures that the converter is added to the persistence unit so that it can be used within the application.

@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactoryBean()
{
    ...
    factory.setDataSource(this.advancedMappingsDataSource());
    factory.setPackagesToScan("com.wrox.site.entities",
            "com.wrox.site.converters");
    factory.setSharedCacheMode(SharedCacheMode.ENABLE_SELECTIVE);
    ...
}

EMBEDDING POJOS WITHIN ENTITIES

Sometimes it’s inconvenient for your entity properties to be mere simple types. Consider the classic telephone number conundrum: A Person has a phone number, but you want to store the country code and phone number in separate columns. You could create properties phoneNumberCountryCode and phoneNumberNumber, but that’s awkward. A more desirable solution would be to have a phoneNumber property of type PhoneNumber, which in turn has properties countryCode and number. Such an approach is possible using JPA embeddable types. Embeddable types are intrinsically part of their enclosing entity. They are always stored in the same table as the entity and share the same ID as the entity. They are not and cannot be actual entities.

Indicating That a Type Is Embeddable

In a lot of ways, an embeddable type looks very much like an entity. It can contain any number of properties with annotations such as @Basic, @Column, @Lob, @Temporal, @Enumerated, @Convert, and more. However, it cannot be annotated @Entity or @Table, and it cannot contain any properties annotated @Id or @EmbeddedId. It can contain properties of other embeddable types.

To mark a class as embeddable, all you have to do is annotate it with @javax.persistence.Embeddable. Demonstrated in the following PhoneNumber class, this annotation indicates that it may be embedded as a property within any entity in your application. Like @Entity classes, @Embeddable classes must be registered as managed classes in your persistence unit. This means you must either specify them in <class> or <jar-file> elements in your persistence unit configuration, leave <exclude-unlisted-classes> disabled in your persistence unit configuration, or include them in the scanned classes discovered by Spring’s LocalContainerEntityManagerFactoryBean. By placing PhoneNumber in the com.wrox.site.entities package, it is automatically discovered and added to the persistence unit.

@Embeddable
public class PhoneNumber
{
    private String countryCode;
    private String number;

    @Basic
    @Column(name = "PhoneNumber_CountryCode")
    public String getCountryCode() { ... }
    public void setCountryCode(String countryCode) { ... }

    @Basic
    @Column(name = "PhoneNumber_Number")
    public String getNumber() { ... }
    public void setNumber(String number) { ... }
}

Notice that PhoneNumber contains two properties, each mapped to its own column. Any entity may include a PhoneNumber property provided that entity’s table has two columns named PhoneNumber_CountryCode and PhoneNumber_Number.

Marking a Property as Embedded

Actually embedding an embeddable type is equally easy. Simply mark a property of that type with the @javax.persistence.Embedded annotation. You must not mark the property with any other annotations, such as @Basic, @Temporal, or @Column. This is demonstrated in the Person entity.

@Entity
public class Person
{
    private long id;
    private String firstName;
    private String lastName;
    private PhoneNumber phoneNumber;

    @Id
    @Column(name = "PersonId")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public long getId() { ... }
    public void setId(long id) { ... }

    @Basic
    public String getFirstName() { ... }
    public void setFirstName(String firstName) { ... }

    @Basic
    public String getLastName() { ... }
    public void setLastName(String lastName) { ... }

    @Embedded
    public PhoneNumber getPhoneNumber() { ... }
    public void setPhoneNumber(PhoneNumber phoneNumber) { ... }
}

NOTE Use of @Embedded is only for non-ID properties in your entities. You can use embeddable types as composite entity IDs, but you must annotate the ID property with @javax.persistence.EmbeddedId instead of @Embedded. You explored creating composite IDs like this in Chapter 20.

As mentioned earlier, embeddable types can themselves contain embedded properties. A great example of this is the following Address and PostalCode POJOs.

@Embeddable
public class PostalCode
{
    private String code;
    private String suffix;

    @Basic
    @Column(name = "PostalCode_Code")
    public String getCode() { ... }
    public void setCode(String code) { ... }

    @Basic
    @Column(name = "PostalCode_Suffix")
    public String getSuffix() { ... }
    public void setSuffix(String suffix) { ... }
}

@Embeddable
public class Address
{
    private String street;
    private String city;
    private String state;
    private String country;
    private PostalCode postalCode;

    @Basic
    @Column(name = "Address_Street")
    public String getStreet() { ... }
    public void setStreet(String street) { ... }

    @Basic
    @Column(name = "Address_City")
    public String getCity() { ... }
    public void setCity(String city) { ... }

    @Basic
    @Column(name = "Address_State")
    public String getState() { ... }
    public void setState(String state) { ... }

    @Basic
    @Column(name = "Address_Country")
    public String getCountry() { ... }
    public void setCountry(String country) { ... }

    @Embedded
    public PostalCode getPostalCode() { ... }
    public void setPostalCode(PostalCode postalCode) { ... }
}

Overriding Embeddable Column Names

The PostalCode is designed so that it can be used on its own or as part of an Address. The problem is the column names. As written, the Person table will have columns Address_Street, Address_City, Address_State, Address_Country, PostalCode_Code, and PostalCode_Suffix. It’s not obvious by these names that the postal code columns are part of the address. You can easily fix this in the Person entity using the @javax.persistence.AttributeOverride annotation, which allows you to alter these column names in the entity in which they are used. You can also use the @javax.persistence.AttributeOverrides annotation, which enables you to group multiple @AttributeOverride annotations.

@Entity
public class Person
{
    ...
    private Address address;
    ...
    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "postalCode.code",
                column = @Column(name = "Address_PostalCode_Code")),
        @AttributeOverride(name = "postalCode.suffix",
                column = @Column(name = "Address_PostalCode_Suffix"))
    })
    public Address getAddress() { ... }
    public void setAddress(Address address) { ... }
}

The name attribute uses dot notation to indicate the property whose column details are being overridden. Because the Address embeddable type contains a property named postalCode, the first part of both names is postalCode. This name is based on the property name, not the property type, so if Address’s postalCode property were named zip, the first part of the override names would be zip. The second parts of the names are the properties within PostalCode being overridden.

With this dot notation, you can keep specifying overrides deeper and deeper and deeper. Attribute overrides also make it possible to use the same embeddable type multiple times in any given entity (or other embeddable type). You simply need to override all the column names in all but one of the uses. (Although more commonly you would simply override them all.) With these changes, the following statement creates the appropriate table and columns for the Person entity.

CREATE TABLE Person (
    PersonId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    FirstName VARCHAR(60) NOT NULL,
    LastName VARCHAR(60) NOT NULL,
    PhoneNumber_CountryCode VARCHAR(5) NOT NULL,
    PhoneNumber_Number VARCHAR(15) NOT NULL,
    Address_Street VARCHAR(100) NOT NULL,
    Address_City VARCHAR(100) NOT NULL,
    Address_State VARCHAR(100) NULL,
    Address_Country VARCHAR(100) NOT NULL,
    Address_PostalCode_Code VARCHAR(10) NOT NULL,
    Address_PostalCode_Suffix VARCHAR(5)
) ENGINE = InnoDB;
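To illustrate reusing one embeddable type multiple times, here is a hypothetical Company entity (not part of the Advanced-Mappings project) that embeds Address twice, for a billing and a shipping address. Every column name is overridden in both uses; overrides for the remaining Address properties (city, state, country, and so on) follow the same pattern and are elided. This is a mapping fragment, not runnable on its own.

```java
@Entity
public class Company
{
    private Address billingAddress;
    private Address shippingAddress;
    ...
    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "street",
                column = @Column(name = "Billing_Address_Street")),
        @AttributeOverride(name = "postalCode.code",
                column = @Column(name = "Billing_Address_PostalCode_Code")),
        ...
    })
    public Address getBillingAddress() { ... }
    public void setBillingAddress(Address billingAddress) { ... }

    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "street",
                column = @Column(name = "Shipping_Address_Street")),
        @AttributeOverride(name = "postalCode.code",
                column = @Column(name = "Shipping_Address_PostalCode_Code")),
        ...
    })
    public Address getShippingAddress() { ... }
    public void setShippingAddress(Address shippingAddress) { ... }
}
```

Without the overrides, both Address properties would map to the same columns and the mapping would be rejected.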

DEFINING RELATIONSHIPS BETWEEN ENTITIES

As you have already seen, it is very common for entities to be related to other entities. Tickets in the Customer Support application have Attachments, for example, and so far you have had to manage that relationship within the service layer. However, this extra step is unnecessary. You may still want to use it in certain situations, especially if a particular entity has many relationships to other entities. But those special circumstances aside, you can define entity relationships directly within those entities, and the JPA provider can retrieve the entities you need at the time you need them.

Understanding One-to-One Relationships

One-to-one relationships might just be the ones you are least likely to define. A one-to-one relationship means that entity A is related to at most one entity B, and entity B is related to at most one entity A. Generally speaking, such a relationship violates the rules of normal form and accepted object-oriented design practices. However, in some situations it’s a more practical approach to solve a peculiar problem. For example, if an entity contains hundreds of properties, you might want to group those properties into sub-entities where all the related properties belong to their own entity. The following mythical Employee entity demonstrates such a use case.

@Entity
public class Employee
{
    private long id;
    ...
    private EmployeeInfo info;

    @Id
    @Column(name = "EmployeeId")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public long getId() { ... }
    public void setId(long id) { ... }
    ...
    @OneToOne(mappedBy = "employee", fetch = FetchType.LAZY,
            cascade = CascadeType.ALL, orphanRemoval = true)
    public EmployeeInfo getInfo() { ... }
    public void setInfo(EmployeeInfo employeeInfo) { ... }
}

@Entity
public class EmployeeInfo
{
    private long id;
    private Employee employee;
    ...
    @Id
    public long getId() { ... }
    public void setId(long id) { ... }

    @OneToOne(optional = false)
    @JoinColumn(name = "EmployeeId")
    public Employee getEmployee() { ... }
    public void setEmployee(Employee employee) { ... }
    ...
}

If you want, you may define the relationship on only one side or the other. For example, you could have the info property in Employee but omit the employee property in EmployeeInfo (or vice versa). In this case, the relationship is bidirectional, so you must annotate the info property on Employee and the employee property on EmployeeInfo with @javax.persistence.OneToOne. The owning side (EmployeeInfo, whose table holds the foreign key) specifies the @JoinColumn, while the inverse side (Employee) specifies the mappedBy attribute.

This attribute tells the JPA provider which property on the other end of the relationship maps back to “this” entity. The fetch attribute indicates when the related entities should be retrieved from the database using the javax.persistence.FetchType enum. FetchType.EAGER means that the JPA provider must retrieve the values when the entity is retrieved. On the other hand, FetchType.LAZY serves as a hint to the JPA provider that it can wait and fetch the values only when the property is first accessed (which may be never, thus saving a trip to the database). However, JPA providers are not required to support lazy loading, so these values may be loaded eagerly anyway. Here, the fetch attribute defaults to FetchType.EAGER, and that’s usually okay for one-to-one relationships. Hibernate ORM and EclipseLink both support lazy loading, but for one-to-one relationships you must enable class weaving. You learn more about how to do this in the “Refining the Customer Support Application” section.

The cascade attribute of @OneToOne indicates what should happen to the related entity when operations are performed on the entity specifying cascade instructions. Using the javax.persistence.CascadeType enum, you can specify one or more of the values DETACH, MERGE, PERSIST, REFRESH, and REMOVE. Each one indicates an EntityManager operation that should cascade to the related entity. You can also use ALL as a shortcut for specifying all five values. For one-to-one relationships, you generally do not want to specify cascade instructions within the owned entity. However, you may want to specify cascade instructions within the owning entity. The related orphanRemoval attribute indicates whether orphaned entities should be deleted from the database. The default (false) means that if you set the Employee’s info property to null, the EmployeeInfo record in the database will not be deleted. This is almost always not what you want to happen, so you should specify true, as indicated in the example.

Finally, the optional attribute indicates whether the relationship is optional. If set to false it indicates there must always be a value on both sides of the relationship. It defaults to true, which means that one side of the relationship may be null.

Using One-to-Many and Many-to-One Relationships

One-to-many and many-to-one relationships are much more common scenarios that you will face in your applications. They are very closely related. In fact, whenever you specify a one-to-many relationship on an entity, you often specify a corresponding many-to-one relationship on the other entity. In a one-to-many relationship, one entity A has a relationship to many entities B. This is usually represented by a collection of some sort in A that stores instances of B. A many-to-one relationship is simply the opposite. So in this specific case, entity B already has a many-to-one relationship with entity A.

Entities are related in such a manner by design, not by annotation. If an entity has a one-to-many relationship with another entity, that other entity will necessarily have a many-to-one relationship with the first entity. However, what you can control is whether you advertise that relationship to JPA. In the Customer Support application you’ve worked on previously, Ticket has a one-to-many relationship to Attachment, and Attachment a many-to-one relationship to Ticket, but the JPA provider knew nothing about this. You need only advertise one or both sides of this relationship if you want to navigate from an entity to its relations. You do this by creating navigational properties, as demonstrated in the Applicant and Resume entities in the Advanced-Mappings project.

@Entity
public class Applicant
{
    private long id;
    private String firstName;
    private String lastName;
    private boolean citizen;
    private Set<Resume> résumés = new HashSet<>();

    @Id
    @Column(name = "ApplicantId")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public long getId() { ... }
    public void setId(long id) { ... }

    @Basic
    public String getFirstName() { ... }
    public void setFirstName(String firstName) { ... }

    @Basic
    public String getLastName() { ... }
    public void setLastName(String lastName) { ... }

    @Basic
    public boolean isCitizen() { ... }
    public void setCitizen(boolean citizen) { ... }

    @OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL,
            orphanRemoval = true)
    @JoinColumn(name = "ApplicantId")
    public Set<Resume> getRésumés() { ... }
    public void setRésumés(Set<Resume> résumés) { ... }
}

@Entity
@Table(name = "Applicant_Resume")
public class Resume
{
    private long id;
    private String title;
    private String content;

    @Id
    @Column(name = "ResumeId")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public long getId() { ... }
    public void setId(long id) { ... }

    @Basic
    public String getTitle() { ... }
    public void setTitle(String title) { ... }

    @Lob
    public String getContent() { ... }
    public void setContent(String content) { ... }
}

NOTE Wait, what’s that? Are those character accents in variable and method names? This is legal? Yes! In Java, there is no rule against accented alphabetic characters in type names, method names, and identifiers. Because the word “resume” without accents is a completely different word meaning to restart or un-pause, you wouldn’t want to use that, would you? Unfortunately, the ZIP format is less cooperative than Java. If the entity class were also named Résumé, the Résumé.java filename would get corrupted in the ZIP file that you downloaded. Because of this, the example code had to leave the accents off the Resume and ResumeRepository class names. However, the RésuméForm class uses accents because it is an inner class of MainController, and so doesn’t affect the filename. Databases are likewise less tolerant of accented characters in object names, so the Applicant_Resume table lacks the accents.

In this example, an applicant (presumably a job applicant) can have multiple résumés. The Applicant entity defines a navigational property named résumés containing a Set of Resumes. This property allows a piece of code holding an Applicant to navigate to that applicant’s Resumes directly without having to go back to the service or repository. In this case the advertised relationship is unidirectional because the Resume class does not contain a navigational property back to Applicant. But on further thought, you probably want code that obtains a Resume to navigate back to the Applicant who created it. Making the advertised relationship bidirectional is as simple as adding the navigational property to Resume and tweaking the navigational property on Applicant.

@Entity
public class Applicant
{
    ...
    @OneToMany(mappedBy = "applicant", fetch = FetchType.LAZY,
            cascade = CascadeType.ALL, orphanRemoval = true)
    public Set<Resume> getRésumés() { ... }
    public void setRésumés(Set<Resume> résumés) { ... }
}

@Entity
@Table(name = "Applicant_Resume")
public class Resume
{
    private long id;
    private Applicant applicant;
    ...
    @ManyToOne(fetch = FetchType.EAGER, optional = false)
    @JoinColumn(name = "ApplicantId")
    public Applicant getApplicant() { ... }
    public void setApplicant(Applicant applicant) { ... }
    ...
}

The @javax.persistence.OneToMany and @javax.persistence.ManyToOne annotations contain many of the same attributes as @OneToOne. @OneToMany lacks an optional attribute because such a concept doesn’t apply to a collection of values. (A collection can always be empty.) Only @OneToMany contains a mappedBy attribute because in a bidirectional one-to-many-to-one relationship, only the one-to-many side needs this information. @OneToMany is also the only one of the two annotations with an orphanRemoval attribute because such an action makes sense only from that side of the relationship.

You may have noticed that the @javax.persistence.JoinColumn annotation moved from the original Applicant entity to the new property in Resume. This annotation, largely similar to the @Column annotation, specifies the column details for the column that joins these two tables. (If the foreign key is composite, you can use @javax.persistence.JoinColumns to group multiple annotations.) In a unidirectional one-to-many relationship, it goes on the only side of the relationship it can: the @OneToMany side (Applicant). For these relationships, it indicates which column in the other entity’s table contains “this” entity’s primary key. However, in a unidirectional many-to-one relationship or a bidirectional one-to-many-to-one relationship, it belongs on the @ManyToOne side of the relationship (Resume). For these relationships @JoinColumn indicates which column in “this” entity’s table contains the other entity’s primary key. It also replaces the @Column annotation for that property.

Instead of a Set of Resumes, you could use a List of Resumes and maintain their order in some fashion. To do this, you annotate the List property (the @OneToMany side) with @javax.persistence.OrderColumn and specify the name of the column in the Applicant_Resume table that the Resumes should be ordered by. You can also specify a Map of Resumes. In this case, you need to pick some column from the Applicant_Resume table to serve as the key of the Map. You could annotate the Map property with @javax.persistence.MapKey, which means that the Map keys are the @Id properties of the Resumes. Alternatively, you could use @javax.persistence.MapKeyColumn to specify the name of an Applicant_Resume column, such as Title, to serve as the Map keys.
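As a sketch, either alternative might look like the following. (You would choose one of the two properties, not both; the Priority column name and the getRésumésByTitle property are assumptions for illustration.)

```java
@Entity
public class Applicant
{
    ...
    // Ordered list: the Priority column in Applicant_Resume records each
    // résumé's position within the list (hypothetical column name).
    @OneToMany(mappedBy = "applicant", fetch = FetchType.LAZY)
    @OrderColumn(name = "Priority")
    public List<Resume> getRésumés() { ... }

    // Map keyed by the Title column in Applicant_Resume.
    @OneToMany(mappedBy = "applicant", fetch = FetchType.LAZY)
    @MapKeyColumn(name = "Title")
    public Map<String, Resume> getRésumésByTitle() { ... }
}
```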

One final note: With Map properties, you can still use @OrderColumn to get a Map whose entries (as returned by entrySet()) and values (as returned by values()) are ordered according to that column.

The final versions of the Applicant and Resume entities map to the following MySQL schema:

CREATE TABLE Applicant (

ApplicantId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

FirstName VARCHAR(60) NOT NULL,

LastName VARCHAR(60) NOT NULL,

Citizen BOOLEAN NOT NULL

) ENGINE = InnoDB;

CREATE TABLE Applicant_Resume (

ResumeId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

ApplicantId BIGINT UNSIGNED NOT NULL,

Title VARCHAR(100) NOT NULL,

Content TEXT NOT NULL,

CONSTRAINT Applicant_Resume_Applicant FOREIGN KEY (ApplicantId)

REFERENCES Applicant(ApplicantId) ON DELETE CASCADE

) ENGINE = InnoDB;

Creating Many-to-Many Relationships

Many-to-many relationships are simply a natural extension of one-to-many and many-to-one relationships. In a many-to-many relationship, each side of the relationship can relate to multiple entities on the other side of the relationship. A common example of this is the relationship between a school and a student. A school can have many students, and a student can have multiple schools. Thus, a School entity would have a Set, List, or Map of Students, and the Student entity would in turn have a Set, List, or Map of Schools.

You can advertise one or both sides of a many-to-many relationship using @javax.persistence.ManyToMany. It has the cascade, fetch, and mappedBy attributes that you are well used to. You must specify mappedBy if, and only if, the advertised relationship is bidirectional, and as usual its value should point to the property on the opposite end of the relationship. You specify mappedBy only on one side of the relationship (the non-owner side, in whatever way you define ownership).

When advertising a many-to-many relationship (unidirectional or bidirectional), the JPA vendor attempts to guess the name of the join table and its columns. For your sanity, it’s best to remove the guessing variable and specify the @javax.persistence.JoinTable annotation. You place this annotation only on the owner side of the relationship (the side opposite the one on which you specify mappedBy). In addition to the table name and other details, this annotation contains joinColumns and inverseJoinColumns attributes. You use joinColumns to specify one or more @JoinColumns that indicate which column or columns “this” (the owning) entity’s primary key maps to. Likewise, you use inverseJoinColumns to specify one or more @JoinColumns that indicate which column or columns the other (owned) entity’s primary key maps to. In the School and Student example described earlier, the mapping would look like this on the School entity:

@ManyToMany(fetch = FetchType.LAZY)

@JoinTable(name = "School_Student",

joinColumns = { @JoinColumn(name = "SchoolId") },

inverseJoinColumns = { @JoinColumn(name = "StudentId") })

public List<Student> getStudents() { ... }

public void setStudents(List<Student> students) { ... }
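To make the relationship bidirectional, the Student entity would advertise the inverse (non-owning) side with mappedBy and no @JoinTable. A sketch, assuming the property names from the School example above:

```java
@Entity
public class Student
{
    ...
    // Non-owning side: mappedBy points at School's "students" property,
    // so the School_Student join table details are not repeated here.
    @ManyToMany(fetch = FetchType.LAZY, mappedBy = "students")
    public List<School> getSchools() { ... }
    public void setSchools(List<School> schools) { ... }
}
```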

ADDRESSING OTHER COMMON SITUATIONS

Several other common situations exist that are simple to address and don’t require entire sections by themselves. Do not construe this to mean they aren’t useful; some of them may be the most useful of all for your needs. This includes topics such as versioning entities, defining common entity ancestors, and adding Collections and Maps to your entities, all of which are addressed in this section.

Versioning Entities with Revisions and Timestamps

At the simplest, you may take any approach to versioning entities that you like. The JPA provider treats a version property as it would any other property, and persists and retrieves it normally. However, you can define a special kind of version property that helps the JPA provider ensure integrity and avoid concurrent modifications when performing merge operations. To create such a property, you annotate it with @javax.persistence.Version.

@Version properties can be ints, Integers, longs, Longs, shorts, Shorts, or Timestamps (and perhaps, in a future version, Instants). You must never set the value of a @Version property manually; the JPA provider does this for you. When writing changes for an entity to the database, the provider increments the version in the UPDATE statement and includes a WHERE clause that fails to evaluate to true if the version has already changed. For example, if the table for a versioned entity is MyEntity, the @Id property column is EntityId, and the integer @Version property column is VersionNumber, the UPDATE statement would look something like this:

UPDATE MyEntity SET [other values[,...]], VersionNumber = VersionNumber + 1

WHERE EntityId = ? AND VersionNumber = ?;

If this statement fails to update any records, some other thread or process has deleted or updated the entity already. The JPA provider throws a javax.persistence.OptimisticLockException if this happens.

Because manually setting the @Version property is bad and can cause serious problems in your application, the best practice is to make the mutator method protected or package-private, thus reducing the likelihood that some code will accidentally set it.

UNDERSTANDING OPTIMISTIC LOCKING VERSUS PESSIMISTIC LOCKING

Optimistic locking allows two or more threads to read the same entity simultaneously but allows only one of those threads to update the entity. This can and will prevent concurrent modification of an entity. However, if you prefer to lock the row at the database so that only one thread may read the entity at any given time, you can do so by specifying the javax.persistence.LockModeType enum constant PESSIMISTIC_READ in any of the EntityManager methods that support it (that is, any methods that affect only one entity at a time). When using Spring Data JPA, you can achieve the same thing by annotating a repository interface method with @org.springframework.data.jpa.repository.Lock and specifying LockModeType.PESSIMISTIC_READ as its value attribute. If you need to add @Lock to a method in a superinterface, just override it in your interface.

When using pessimistic locking, two types of failures can occur. The first is a failure to obtain a lock that results in the database rolling back the transaction. Such an error is fatal to the transaction and results in a javax.persistence.PessimisticLockException (also rolling back the JTA transaction if one exists). However, the database locking failure may result in only the rollback of a single statement. In this case, the failure is transient and results in a javax.persistence.LockTimeoutException. It is up to you to handle the LockTimeoutException by either retrying the statement or rolling back the transaction. In almost all cases, optimistic locking is sufficient, and you shouldn’t need to enable pessimistic locking.
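As a sketch, requesting a pessimistic read through a Spring Data JPA repository might look like this (the Ticket entity and repository name are assumptions for illustration; the inherited lookup method is simply overridden to attach the annotation):

```java
public interface TicketRepository extends CrudRepository<Ticket, Long>
{
    // Overriding the superinterface method just to add @Lock; the selected
    // row stays locked until the surrounding transaction completes.
    @Override
    @Lock(LockModeType.PESSIMISTIC_READ)
    Ticket findOne(Long id);
}
```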

Defining Abstract Entities with Common Properties

After you create several entities, you’ll probably notice that they contain at least a few of the same properties over and over again. For example, you might realize that all your entities have IDs, creation dates, last modification dates, and @Version properties. Instead of redefining these properties every time you create an entity, you can create a mapped superclass that defines these properties once.

A mapped superclass is very much like an entity. It contains no @Table annotation, and the @javax.persistence.MappedSuperclass annotation replaces the @Entity annotation, but otherwise you can map any properties within it just like you would in a normal entity. The properties in a mapped superclass always map to the same table as the eventual @Entity class that extends it. You are not limited to a single mapped superclass in the hierarchy, either. You can define any number of mapped superclasses and an entity inherits the properties of all the mapped superclasses that are its ancestors.

A mapped superclass or entity can override the column mappings of properties it inherits by overriding the accessor method and redefining the @Column annotation. However, this doesn’t work if you use field access instead of method access, so in those cases, you have to annotate the class with @AttributeOverride, specify the property name in the name attribute, and provide the new @Column definition. You cannot override annotations such as @Basic, @Lob, @Temporal, @Enumerated, @Convert, and other JPA type annotations. If a mapped superclass defines a @Transient property, its subclasses cannot override that property to make it non-@Transient. Likewise, a mapped superclass or entity cannot override a non-@Transient method from one of its ancestors and make it @Transient.

The BaseEntity mapped superclass in the Advanced-Mappings project defines the simple id property that all extending entities will have. VersionedEntity, another mapped superclass, extends BaseEntity to specify a @Version property for optimistic locking. Finally, the AuditedEntity mapped superclass extends VersionedEntity and specifies creation and modification date properties.

@MappedSuperclass

public abstract class BaseEntity

{

private long id;

@Id

@GeneratedValue(strategy = GenerationType.IDENTITY)

public long getId() { ... }

public void setId(long id) { ... }

}

@MappedSuperclass

public abstract class VersionedEntity extends BaseEntity

{

private long version;

@Version

@Column(name = "Revision")

public long getVersion() { ... }

void setVersion(long version) { ... }

}

@MappedSuperclass

public abstract class AuditedEntity extends VersionedEntity

{

private Instant dateCreated;

private Instant dateModified;

@Convert(converter = InstantConverter.class)

public Instant getDateCreated() { ... }

public void setDateCreated(Instant dateCreated) { ... }

@Convert(converter = InstantConverter.class)

public Instant getDateModified() { ... }

public void setDateModified(Instant dateModified) { ... }

}

You can now create as many entities as you want that extend any of these mapped superclasses. The NewsArticle entity that follows extends AuditedEntity and inherits all its superclasses’ properties. Its SQL schema reflects the overridden id property column name.

@Entity

@AttributeOverride(name = "id", column = @Column(name = "ArticleId"))

public class NewsArticle extends AuditedEntity

{

private String title;

private String content;

@Basic

public String getTitle() { ... }

public void setTitle(String title) { ... }

@Basic

public String getContent() { ... }

public void setContent(String content) { ... }

}

CREATE TABLE NewsArticle (

ArticleId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

Revision BIGINT UNSIGNED NOT NULL,

DateCreated TIMESTAMP(6) NULL,

DateModified TIMESTAMP(6) NULL,

Title VARCHAR(100) NOT NULL,

Content TEXT NOT NULL

) ENGINE = InnoDB;

Like @Entity classes, @MappedSuperclasses must be registered as managed classes in your persistence unit. This means you must either specify them in <class> or <jar-file> elements in your persistence unit configuration, leave <exclude-unlisted-classes> disabled in your persistence unit configuration, or include them in the scanned classes discovered by Spring’s LocalContainerEntityManagerFactoryBean. By placing BaseEntity, VersionedEntity, and AuditedEntity in the com.wrox.site.entities package, they are automatically discovered and added to the persistence unit.
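If you do list the managed classes explicitly, the corresponding persistence unit configuration might be sketched like this (the persistence unit name is a placeholder):

```xml
<persistence-unit name="CustomerSupport">
    <class>com.wrox.site.entities.BaseEntity</class>
    <class>com.wrox.site.entities.VersionedEntity</class>
    <class>com.wrox.site.entities.AuditedEntity</class>
    <class>com.wrox.site.entities.NewsArticle</class>
</persistence-unit>
```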

Mapping Basic and Embedded Collections

Up to this point, you have created entities with basic- and embedded-type properties and properties that are collections of other entities. However, sometimes you just need a simple collection of a basic or embedded type. For example, an employee usually has multiple phone numbers and addresses. You could add two or three phone number properties and two or three address properties, but why constrain the design this way? It not only violates database normal form, but it violates good object-oriented design practices as well. It makes more sense to have either a List or Set of phone numbers and addresses, depending on whether order is important to you. In JPA you mark a field that is a collection of basic or embedded types with the @javax.persistence.ElementCollection annotation.

@ElementCollection has a targetClass attribute that is required only in certain situations. It specifies the type of the elements stored in the Collection. If you use an untyped Collection (which you should really never, ever do because it is unsafe), you must specify the targetClass attribute. However, as long as you use generics (such as List<String> or Set<Address>), the JPA provider can discover the element type (String, Address) automatically, and you need not specify the targetClass attribute. The fetch attribute indicates whether the Collection values should be retrieved from the database eagerly or lazily, and defaults to FetchType.LAZY.

You are now probably wondering about the restriction described earlier on embedded types: They must exist within the same table as the entity that contains them. Well, this is only part of the story. They may exist in a separate table if they are part of a collection property. In this case, the collection is stored by default in a table whose name is equal to the containing entity’s table name followed by an underscore followed by the property name. Certain assumptions are also made about the column names based on the types and property names. Consider the Employee entity as an example.

@Entity

public class Employee

{

private long id;

private String firstName;

private String lastName;

private List<String> phoneNumbers = new ArrayList<>();

private Set<Address> addresses = new HashSet<>();

@Id

@Column(name = "EmployeeId")

@GeneratedValue(strategy = GenerationType.IDENTITY)

public long getId() { ... }

public void setId(long id) { ... }

@Basic

public String getFirstName() { ... }

public void setFirstName(String firstName) { ... }

@Basic

public String getLastName() { ... }

public void setLastName(String lastName) { ... }

@ElementCollection(fetch = FetchType.EAGER)

public List<String> getPhoneNumbers() { ... }

public void setPhoneNumbers(List<String> phoneNumbers) { ... }

@ElementCollection(fetch = FetchType.LAZY)

public Set<Address> getAddresses() { ... }

public void setAddresses(Set<Address> addresses) { ... }

}

With these default mappings, an Employee’s phone numbers are assumed to reside in the database table Employee_PhoneNumbers with a foreign key column named EmployeeId and a PhoneNumber column containing the phone number. The Employee’s addresses are assumed to reside in Employee_Addresses, also with an EmployeeId foreign key column. However, because the Address class you created earlier in the chapter is an embeddable type with its own mappings, the JPA provider doesn’t have to make assumptions and knows the columns for it are Address_Street, Address_City, Address_State, Address_Country, PostalCode_Code, and PostalCode_Suffix. These may not be the names you want, so you can customize them:

@Entity

public class Employee

{

...

@ElementCollection(fetch = FetchType.EAGER)

@CollectionTable(name = "Employee_Phone", joinColumns = {

@JoinColumn(name = "Employee", referencedColumnName = "EmployeeId")

})

@OrderColumn(name = "Priority")

public List<String> getPhoneNumbers() { ... }

public void setPhoneNumbers(List<String> phoneNumbers) { ... }

@ElementCollection(fetch = FetchType.LAZY)

@CollectionTable(name = "Employee_Address", joinColumns = {

@JoinColumn(name = "Employee", referencedColumnName = "EmployeeId")

})

@AttributeOverrides({

@AttributeOverride(name = "street", column =@Column(name = "Street")),

@AttributeOverride(name = "city", column = @Column(name = "City")),

@AttributeOverride(name = "state", column = @Column(name = "State")),

@AttributeOverride(name = "country", column=@Column(name = "Country"))

})

public Set<Address> getAddresses() { ... }

public void setAddresses(Set<Address> addresses) { ... }

}

The @javax.persistence.CollectionTable annotation allows you to customize the table name and the columns that it joins on. @javax.persistence.OrderColumn enables you to specify the column that orders the elements in the Collection (which applies only if the collection is a List instead of a Set). In a Collection holding a basic type, you can use @Column to specify the name of the column in which the values are stored inside the collection table. However, if the Collection holds embedded types, you must use @AttributeOverride and @AttributeOverrides. As mapped here, the Employee entity resides in the following MySQL schema:

CREATE TABLE Employee (

EmployeeId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

FirstName VARCHAR(50) NOT NULL,

LastName VARCHAR(50) NOT NULL

) ENGINE = InnoDB;

CREATE TABLE Employee_Phone (

Employee BIGINT UNSIGNED NOT NULL,

Priority SMALLINT UNSIGNED NOT NULL,

Number VARCHAR(20) NOT NULL,

CONSTRAINT Employee_Phone_Employee FOREIGN KEY (Employee)

REFERENCES Employee (EmployeeId) ON DELETE CASCADE

) ENGINE = InnoDB;

CREATE TABLE Employee_Address (

Employee BIGINT UNSIGNED NOT NULL,

Street VARCHAR(100) NOT NULL,

City VARCHAR(100) NOT NULL,

State VARCHAR(100) NULL,

Country VARCHAR(100) NOT NULL,

PostalCode_Code VARCHAR(10) NOT NULL,

PostalCode_Suffix VARCHAR(5),

CONSTRAINT Employee_Address_Employee FOREIGN KEY (Employee)

REFERENCES Employee(EmployeeId) ON DELETE CASCADE

) ENGINE = InnoDB;

NOTE In addition to embeddable types and the standard basic types, you can also store types that require conversion within Collection properties. Just annotate the Collection property with @Convert as if it were any other basic property.

Persisting a Map of Key-Value Pairs

Although its name suggests it applies only to Collection properties, @ElementCollection can also mark a Map property for persistence. Map properties are always stored with the key and value in the same table, which has the same default name as Collection properties and which you can also customize using @CollectionTable. You use @Column to specify the name of the column that stores the Map value, and @javax.persistence.MapKeyColumn to specify the name of the column that stores the map key. Map properties should also always use generics, but if for some reason they cannot, use the targetClass attribute of @ElementCollection to specify the value class and @javax.persistence.MapKeyClass to specify the key class.

Both the key and the value may be any of the basic types, including enum and temporal types, any types that require an attribute converter, and any embeddable types. As with Collection properties, you do not have to specify the @Embedded annotation on Map properties whose keys or values are embeddable types. Enumerated types follow the same semantics as for basic properties, and you can override these semantics with the usual @Enumerated annotation for the Map value and the @javax.persistence.MapKeyEnumerated annotation for the key. Likewise, you can customize the semantics for temporal types with the usual @Temporal for the value and @javax.persistence.MapKeyTemporal for the key.

Using Map properties is a great way to store supplemental entity properties that you don’t know about at compile time, such as custom fields. This is demonstrated with the following new Employee property and the table to which it’s mapped.

@Entity

public class Employee

{

...

private Map<String, String> extraProperties = new HashMap<>();

...

@ElementCollection(fetch = FetchType.EAGER)

@CollectionTable(name = "Employee_Property", joinColumns = {

@JoinColumn(name = "Employee", referencedColumnName = "EmployeeId")

})

@Column(name = "Value")

@MapKeyColumn(name = "KeyName")

public Map<String, String> getExtraProperties() { ... }

public void setExtraProperties(Map<String, String> extraProperties) { ... }

}

CREATE TABLE Employee_Property (

Employee BIGINT UNSIGNED NOT NULL,

KeyName VARCHAR(100) NOT NULL,

Value VARCHAR(255) NOT NULL,

CONSTRAINT Employee_Property_Employee FOREIGN KEY (Employee)

REFERENCES Employee(EmployeeId) ON DELETE CASCADE

) ENGINE = InnoDB;

Storing an Entity in Multiple Tables

Although it is an unlikely and unusual scenario, you can persist an entity in multiple tables. Don’t confuse this with the concept of Collection or Map properties residing in separate tables — that’s an expected practice mandated by good normal form. Instead, this particular scenario actually breaks normal form by storing basic properties of an entity in separate tables. In an absurd example, an Employee’s first name might be stored in table Employee1, whereas his last name is stored in table Employee2. This is usually a symptom of legacy databases, poorly designed databases retrofitted for object-relational mapping, or entities that exceed the number of columns permitted in a single table by the underlying database vendor.

By default, all the non-Collection, non-Map properties of an entity are assumed to reside in the primary table. This is the table specified in @Table or, in the absence of @Table, the table with the same name as the entity. If some of an entity’s properties reside in a secondary table, you should annotate it with @javax.persistence.SecondaryTable to specify the name and (optionally) other details of the table. If an entity has multiple secondary tables, you can use @javax.persistence.SecondaryTables to group multiple @SecondaryTable annotations. Then, throughout the entity, each property should be annotated with @Column to indicate which table it belongs to. The @Id property must always reside in the primary table.

A secondary table is assumed to have a column of the same name and type as the primary key column of the primary table, and that column should be the primary key for the secondary table. You can customize the details of this column in the @SecondaryTable annotation.

Though mapping an entity to multiple tables is not demonstrated in the Advanced-Mappings project, the Employee entity might look something like this in such a scenario:

@Entity

@Table(name = "Employee")

@SecondaryTables({

@SecondaryTable(name = "Employee2", pkJoinColumns = {

@PrimaryKeyJoinColumn(name = "Employee",

referencedColumnName = "EmployeeId")

})

})

public class Employee

{

...

@Id

@Column(name = "EmployeeId")

@GeneratedValue(strategy = GenerationType.IDENTITY)

public long getId() { ... }

public void setId(long id) { ... }

@Basic

@Column(name = "FirstName", table = "Employee")

public String getFirstName() { ... }

public void setFirstName(String firstName) { ... }

@Basic

@Column(name = "LastName", table = "Employee2")

public String getLastName() { ... }

public void setLastName(String lastName) { ... }

...

}

CREATING PROGRAMMATIC TRIGGERS

To a large extent, the relational database triggers that you may have used in the past should exist as business logic in your services. This removal of all business logic from the database is the last step to completely abstracting your application from its storage mechanism and allowing you to easily switch out the storage mechanism as the need arises. Equally important, it reinforces the view of the application as the goal and the database as simply a means to that goal. However, there may be times when your entity needs a little bit of persistence logic of its own. The classic example is when updating versioning- and auditing-related fields. (Although you may simply want to let Spring Data take care of that for you.) Though not strictly related to mapping, JPA allows you to add special annotations to your entities that define programmatic triggers in Java code instead of relying on database triggers.

Acting before and after CRUD Operations

You can define a trigger on any entity by creating a method that executes the logic you want and annotating it with one of the trigger annotations. These methods, officially called lifecycle event handlers or lifecycle event callback methods, are instance methods and have access to all the properties of the entity. This means that the methods can use and modify those properties however you want. In addition to annotating methods in a concrete entity, you can also annotate methods on a mapped superclass to create triggers that apply to all entities that inherit from that mapped superclass. Of course, such triggers could safely use and modify only the properties of the mapped superclass and its ancestors.

The @javax.persistence.PostLoad annotation defines a read trigger, executing after the entity is constructed and populated from the ResultSet. It is the only trigger annotation that does not have a counterpart that executes before the operation because that would not be possible for reads. The other annotations, all in the same package, are as follows:

· @PrePersist methods are executed before the entity is persisted: immediately after the persist method is called on the EntityManager and immediately before the entity is actually attached to the EntityManager. Note that in a long-running transaction, it could be a long time after this method executes before the entity is actually written to the database.

· @PostPersist methods are invoked immediately after the entity is actually written to the database (either during flush or commit, whichever comes first). The transaction could still roll back after this method is called.

· @PreUpdate methods are invoked as soon as the EntityManager detects that the entity has changed. It’s important to understand that, when using JPA, you don’t actually have to call merge to update an entity unless the instance is modified after a transaction is committed. Calling any of the mutator methods on an entity changes the entity, and those changes are written to the database when the transaction is committed even if you do not call merge. As soon as the entity has been changed by a mutator invocation, the @PreUpdate trigger fires. In a long-running transaction, it could be a long time after this method executes before the entity is actually written to the database.

· @PostUpdate methods are executed immediately after the changes to the entity are written to the database. The transaction could still roll back after this method is called.

· @PreRemove methods are invoked when the entity is marked for removal (deletion) from the EntityManager. In a long-running transaction, it could be a long time after this method executes before the entity is actually written to the database.

· @PostRemove methods are invoked immediately after the entity is actually deleted from the database. The transaction could still roll back after this method is called.

A trigger method may serve as a trigger for multiple events (for example, it could be annotated with both @PrePersist and @PreUpdate), but an entity may have no more than one method annotated for a particular event. This includes inherited trigger methods, meaning for example that a @PostRemove method in an entity will disable any @PostRemove methods that it inherits from mapped superclasses. However, it won’t disable that same method from being called for other events it may be annotated for, such as @PostUpdate.

Trigger methods must return void and have no arguments. They can be named anything and be public, protected, package-private, or private, but they must not be static. To prevent unusual behavior, they should never call EntityManager or Query methods or access any other entities. If a trigger method throws an exception, the transaction is rolled back. (So you could, for example, use trigger methods to prevent illegal modifications.)

The Person entity you created earlier demonstrates use of all the different trigger methods to log the life cycle of an entity.

@Entity

public class Person

{

...

private static final Logger log = LogManager.getLogger();

@PostLoad void readTrigger()

{

log.debug("Person entity read.");

}

@PrePersist void beforeInsertTrigger()

{

log.debug("Person entity about to be inserted.");

}

@PostPersist void afterInsertTrigger()

{

log.debug("Person entity inserted into database.");

}

@PreUpdate void beforeUpdateTrigger()

{

log.debug("Person entity just updated by call to mutator method.");

}

@PostUpdate void afterUpdateTrigger()

{

log.debug("Person entity just updated in the database.");

}

@PreRemove void beforeDeleteTrigger()

{

log.debug("Person entity about to be deleted.");

}

@PostRemove void afterDeleteTrigger()

{

log.debug("Person entity deleted from database.");

}

}

Using Entity Listeners

Entity listeners are closely related to the trigger methods you just read about. An entity listener is a construct for defining trigger methods outside of an entity class. These methods are called external trigger methods or external lifecycle event handlers, in contrast to the internal trigger methods you just learned about, which exist as part of the entity class or its superclasses. External trigger methods enable you to keep this logic truly separate from your entity classes. Entity listeners must have public, no-argument constructors and may define any or all the trigger methods previously described. However, there are a few minor differences:

· The external trigger methods defined in an entity listener must have a single argument: the entity that caused the lifecycle event. You can make the type of the argument as vague (a mapped superclass or even just Object) or as specific (the exact entity type) as you want.

· Entity listeners are inherited from mapped superclasses just like internal trigger methods are, except they do not override inherited entity listeners. This means that you can have multiple trigger methods execute for a particular lifecycle event on the same entity.

When executing trigger methods, the provider first invokes all external trigger methods and then invokes all internal trigger methods. External trigger methods in entity listeners execute starting at the highest point in the mapped superclass ancestry and complete with the actual entity class. You can also define default entity listeners that execute before all others. These entity listeners apply to all entities anywhere in your application, but you can define a default entity listener only in a mapping file (such as orm.xml).

Writing an entity listener class is as simple as creating a class and adding trigger methods to it. The class does not have to implement any interfaces or extend any superclasses. (However, it certainly can implement or extend other types if that helps you in some way.) After you create an entity listener, you have to attach it to an entity using the @javax.persistence.EntityListeners annotation. You can place this annotation on an entity class or mapped superclass to attach the listener or listeners to that entity class or mapped superclass, for example:

@EntityListeners(Listener1.class)

@MappedSuperclass

public abstract class AbstractEntity

{

...

}

@EntityListeners({ Listener2.class, Listener3.class })

@Entity

public class ConcreteEntity extends AbstractEntity

{

...

}
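To make this concrete, here is a sketch of what a listener class such as Listener1 might contain. The method names and log messages are illustrative, not prescribed — only the annotations, the public no-argument constructor, and the single entity-typed argument matter. Object is used as the argument type here so the listener applies to any entity:

```java
import javax.persistence.PostPersist;
import javax.persistence.PrePersist;

// Hypothetical listener attached with @EntityListeners(Listener1.class).
// External trigger methods take exactly one argument: the entity that
// caused the lifecycle event.
public class Listener1
{
    // Entity listeners must have a public, no-argument constructor.
    public Listener1() { }

    @PrePersist
    void beforeCreateTrigger(Object entity)
    {
        System.out.println("About to persist a " +
                entity.getClass().getSimpleName() + ".");
    }

    @PostPersist
    void afterCreateTrigger(Object entity)
    {
        System.out.println(entity.getClass().getSimpleName() +
                " just persisted to the database.");
    }
}
```

Because Listener1 is attached to AbstractEntity, these methods execute for every entity that extends that mapped superclass — and they run before any internal trigger methods declared on the entities themselves.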

If you create an entity that extends a mapped superclass but you do not want to inherit the mapped superclass’s entity listeners, you can annotate the entity with @javax.persistence.ExcludeSuperclassListeners. You can also annotate a mapped superclass with @ExcludeSuperclassListeners to stop inheritance of its superclass listeners for it and for its subclasses. Likewise, you can annotate an entity or mapped superclass with @javax.persistence.ExcludeDefaultListeners to exclude all default listeners for it and for its subclasses.
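As a sketch (StandaloneEntity is a hypothetical entity name), an entity that extends AbstractEntity but opts out of both the inherited listeners and any default listeners declared in orm.xml would look like this:

```java
import javax.persistence.Entity;
import javax.persistence.ExcludeDefaultListeners;
import javax.persistence.ExcludeSuperclassListeners;

// Inherits AbstractEntity's mappings, but neither Listener1 nor any
// default entity listeners fire for this entity or its subclasses.
@Entity
@ExcludeSuperclassListeners
@ExcludeDefaultListeners
public class StandaloneEntity extends AbstractEntity
{
    ...
}
```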

REFINING THE CUSTOMER SUPPORT APPLICATION

With the tools that you have learned about in this chapter, you can make many improvements to the Customer Support application you have been working on throughout this book. To start with, you can stop using Ticket as a DTO for TicketEntity, get rid of TicketEntity altogether, and start using Ticket as an entity directly. This is made possible using the InstantConverter you created earlier in the chapter.

You can also define a relationship between Tickets and Attachments so that you can automatically retrieve a ticket’s attachments when you retrieve the Ticket entity. Speaking of attachments, you can now add attachments to TicketComments, using a join table and a relationship with Attachment similar to that in Ticket. You can also directly associate the UserPrincipal to a Ticket or TicketComment. Finally, you can ensure that an Attachment’s content is loaded lazily so that you don’t load potentially hundreds of megabytes of data when listing the attachments on a ticket and its comments. The Customer-Support-v18 project, available for download from the wrox.com code download site, accomplishes all these things.

Mapping a Collection of Attachments

Because you’ll now use Attachments for both Tickets and TicketComments, the Attachment entity no longer needs a ticketId property, and the corresponding table no longer needs a TicketId column. Instead, use the following join tables, found in create.sql, to relate Attachments to both Tickets and TicketComments.

USE CustomerSupport;

CREATE TABLE Ticket_Attachment (

SortKey SMALLINT NOT NULL,

TicketId BIGINT UNSIGNED NOT NULL,

AttachmentId BIGINT UNSIGNED NOT NULL,

CONSTRAINT Ticket_Attachment_Ticket FOREIGN KEY (TicketId)

REFERENCES Ticket (TicketId) ON DELETE CASCADE,

CONSTRAINT Ticket_Attachment_Attachment FOREIGN KEY (AttachmentId)

REFERENCES Attachment (AttachmentId) ON DELETE CASCADE,

INDEX Ticket_OrderedAttachments (TicketId, SortKey, AttachmentId)

) ENGINE = InnoDB;

CREATE TABLE TicketComment_Attachment (

SortKey SMALLINT NOT NULL,

CommentId BIGINT UNSIGNED NOT NULL,

AttachmentId BIGINT UNSIGNED NOT NULL,

CONSTRAINT TicketComment_Attachment_Comment FOREIGN KEY (CommentId)

REFERENCES TicketComment (CommentId) ON DELETE CASCADE,

CONSTRAINT TicketComment_Attachment_Attachment FOREIGN KEY (AttachmentId)

REFERENCES Attachment (AttachmentId) ON DELETE CASCADE,

INDEX TicketComment_OrderedAttachments (CommentId, SortKey, AttachmentId)

) ENGINE = InnoDB;

If you have been running previous versions of the Customer Support application in earlier chapters, you need to migrate the data in the TicketId column to the Ticket_Attachment table and then drop the TicketId column. The following statements, commented out in create.sql, take care of this for you.

USE CustomerSupport;

INSERT INTO Ticket_Attachment (SortKey, TicketId, AttachmentId)

SELECT @rn := @rn + 1, TicketId, AttachmentId

FROM Attachment, (SELECT @rn:=0) x

ORDER BY TicketId, AttachmentName;

CREATE TEMPORARY TABLE $minSortKeys ENGINE = Memory (

SELECT min(SortKey) AS SortKey, TicketId FROM Ticket_Attachment GROUP BY TicketId

);

UPDATE Ticket_Attachment a SET a.SortKey = a.SortKey - (

SELECT x.SortKey FROM $minSortKeys x WHERE x.TicketId = a.TicketId

) WHERE TicketId > 0;

DROP TABLE $minSortKeys;

ALTER TABLE Attachment DROP FOREIGN KEY Attachment_TicketId;

ALTER TABLE Attachment DROP COLUMN TicketId;

Mapping the Ticket-Attachment and TicketComment-Attachment relationships is easy. First, the Ticket no longer has a getAttachment method to retrieve an Attachment by name. Attachments are not strictly related to Tickets anymore, so individual retrieval must use the ID instead of the name. The following mapping joins the Ticket and Attachment entities:

@OneToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL,

orphanRemoval = true)

@JoinTable(name = "Ticket_Attachment",

joinColumns = { @JoinColumn(name = "TicketId") },

inverseJoinColumns = { @JoinColumn(name = "AttachmentId") })

@OrderColumn(name = "SortKey")

@XmlElement(name = "attachment")

@JsonProperty

public List<Attachment> getAttachments()

{

return this.attachments;

}

The following mapping, in contrast, joins TicketComment with its Attachments:

@OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL,

orphanRemoval = true)

@JoinTable(name = "TicketComment_Attachment",

joinColumns = { @JoinColumn(name = "CommentId") },

inverseJoinColumns = { @JoinColumn(name = "AttachmentId") })

@OrderColumn(name = "SortKey")

@XmlElement(name = "attachment")

@JsonProperty

public List<Attachment> getAttachments()

{

return this.attachments;

}

Notice that a ticket’s attachments are loaded lazily, whereas a comment’s attachments are loaded eagerly. Why? When listing tickets, you don’t list their attachments, so you have no reason to load information that you don’t need. Instead, in the DefaultTicketService you can tell Hibernate to load the attachments for an individually retrieved ticket by calling a method on the attachment list during the transaction. (Remember that getNumberOfAttachments calls the size method of the List<Attachment>.)

@Override

@Transactional

public Ticket getTicket(long id)

{

Ticket ticket = this.ticketRepository.findOne(id);

ticket.getNumberOfAttachments();

return ticket;

}

However, this is different for comments. When listing the comments while viewing a ticket, you also want to list their attachments. Thus, you load comments’ attachment lists eagerly. Because Attachment is shared between both these entities, the relationship is unidirectional: Attachment has no navigational properties back to Ticket or TicketComment (no @ManyToOne Ticket or TicketComment properties).

One of the upsides to this change is the improved simplicity of the DefaultTicketService. You no longer have to translate between DTOs and entities, eliminating a lot of code. You can view the refactored service, in addition to the updated controller and views, in the downloaded project.

Lazy Loading Simple Properties with Load Time Weaving

Now that both tickets and their comments can have attachments, you needlessly load a significant amount of data just by viewing a ticket. If a ticket has several 10-megabyte attachments, and each comment has several 10-megabyte attachments, viewing a ticket could load hundreds of megabytes of data that isn’t being used. This would be an enormous performance problem. What you really need to do is lazy load the Attachment’s contents property:

@Lob

@Basic(fetch = FetchType.LAZY)

@XmlElement

@XmlSchemaType(name = "base64Binary")

@JsonProperty

public byte[] getContents()

{

return this.contents;

}

Then in the DefaultTicketService you would load this content only when getting an individual ticket (for download):

@Override

@Transactional

public Attachment getAttachment(long id)

{

Attachment attachment = this.attachmentRepository.findOne(id);

if(attachment != null)

attachment.getContents();

return attachment;

}

However, it isn’t that simple. Lazy loading works automatically for properties that are Maps and Collections (Lists and Sets) because those are interfaces, and Hibernate ORM comes with proxy implementations of those interfaces that run the necessary queries to load the data only when the Map or Collection property is used in some way (its size is calculated, it is iterated over, and so on). For you to lazy load simple properties of types like byte[] or String, or @OneToOne or @ManyToOne properties, Hibernate must instrument the bytecode of the entities so that it can intercept the method calls that retrieve those properties. It cannot do this out of the box without some configuration.
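To illustrate the idea behind those collection proxies, the following toy class (a deliberately simplified sketch, not Hibernate’s actual implementation) defers its “query” — here just a Loader callback — until the list is first used:

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

// Toy stand-in for the proxy Lists Hibernate supplies for lazy
// collection properties: the loader runs only on first access.
public class LazyList<E> extends AbstractList<E>
{
    public interface Loader<E> { List<E> load(); }

    private final Loader<E> loader; // in Hibernate, this would run SQL
    private List<E> delegate;       // null until first access

    public LazyList(Loader<E> loader) { this.loader = loader; }

    public boolean isInitialized() { return this.delegate != null; }

    private List<E> delegate()
    {
        if(this.delegate == null)
            this.delegate = this.loader.load(); // the "query" executes here
        return this.delegate;
    }

    @Override public E get(int index) { return this.delegate().get(index); }
    @Override public int size() { return this.delegate().size(); }

    public static void main(String... args)
    {
        LazyList<String> attachments =
                new LazyList<>(() -> Arrays.asList("a.txt", "b.png"));
        System.out.println(attachments.isInitialized()); // prints false
        System.out.println(attachments.size());          // triggers load; prints 2
        System.out.println(attachments.isInitialized()); // prints true
    }
}
```

This also shows why calling getNumberOfAttachments inside the transaction works as an initialization trick: invoking size on the proxy forces the load while the Session is still open. No interface trick like this exists for a plain byte[] field, which is why simple properties require bytecode instrumentation instead.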

First, you have to set up an environment capable of bytecode instrumentation. You have three different ways to do this:

· Attach a Java agent to the JVM (see the java command’s -javaagent argument), which uses a class file transformer to inspect and, if necessary, transform all classes loaded by the class loaders in that JVM. This is a bit heavy-handed for an application server or Servlet container environment, though. You need something that applies only to a single application.

· Use Hibernate’s org.hibernate.tool.instrument.InstrumentTask Ant task (either in your Ant script or through the Ant plug-in in your Maven POM). This task modifies the bytecode of your entities at build time, right after you compile them and before you deploy your application. Other O/RMs provide similar mechanisms for decorating bytecode at build time.

· Use load-time bytecode weaving. You see this final approach in the Customer-Support-v18 project.

With Spring Framework’s load time weaving feature, you can transform classes when they are loaded from their class files using one of several pluggable org.springframework.instrument.classloading.LoadTimeWeaver implementations. The fallback implementation is one that uses a Java agent as previously mentioned, but this is not necessary here. A better option is to use a weaver that takes advantage of the instrumentable ClassLoader provided by the container.

GlassFish, JBoss, WebLogic, and WebSphere all provide instrumentable ClassLoaders that Spring can take advantage of. Prior to Tomcat 8.0 you had to tell Tomcat to use a special ClassLoader (provided by Spring) that extended the default Tomcat ClassLoader. However, Tomcat 8.0 now provides an instrumentable ClassLoader that Spring can use automatically.

Configuring Spring Framework’s load time weaving is as simple as adding @org.springframework.context.annotation.EnableLoadTimeWeaving to the RootContextConfiguration. It automatically detects and uses Tomcat’s instrumentable ClassLoader. Telling Hibernate ORM to use this load time weaver requires one additional Hibernate property.

...

@EnableLoadTimeWeaving

...

public class RootContextConfiguration implements

AsyncConfigurer, SchedulingConfigurer, TransactionManagementConfigurer

{

...

@Bean

public LocalContainerEntityManagerFactoryBean entityManagerFactoryBean()

{

...

properties.put("hibernate.ejb.use_class_enhancer", "true");

...

}

...

}

The final thing you must consider is XML and JSON serialization of these entities. However you instrument these classes (statically at build time, with an agent, or dynamically with load time weaving), Hibernate can add any number of unspecified fields and methods to your entities. This is okay because they won’t interfere with your normal use of these entities, but JAXB (which you use for XML serialization) and Jackson Data Processor (which you use for JSON serialization) do not know what to do with these fields and methods. The solution to this is telling JAXB and Jackson to ignore the properties of your entities by default using @XmlAccessorType and @JsonAutoDetect, adding @XmlElement and @JsonProperty to the properties that you do want serialized, and removing the now-unnecessary @XmlTransient and @JsonIgnore annotations from the properties that you do not want serialized.

...

@XmlAccessorType(XmlAccessType.NONE)

@JsonAutoDetect(creatorVisibility = JsonAutoDetect.Visibility.NONE,

fieldVisibility = JsonAutoDetect.Visibility.NONE,

getterVisibility = JsonAutoDetect.Visibility.NONE,

isGetterVisibility = JsonAutoDetect.Visibility.NONE,

setterVisibility = JsonAutoDetect.Visibility.NONE)

public class Ticket implements Serializable

{

...

@XmlElement

@JsonProperty

public long getId() { ... }

...

}

...

@XmlAccessorType(XmlAccessType.NONE)

@JsonAutoDetect(...)

public class TicketComment implements Serializable

{

...

@XmlElement

@JsonProperty

public long getId() { ... }

...

}

...

@XmlAccessorType(XmlAccessType.NONE)

@JsonAutoDetect(...)

public class Attachment implements Serializable

{

...

@XmlElement

@JsonProperty

public long getId() { ... }

...

}

After you review all the changes to the Customer Support application, compile the project, start Tomcat from your IDE, and go to http://localhost:8080/support in your browser. Log in, create a ticket or two, and add comments with attachments to see how it all works. If you place a breakpoint in the TicketController’s view/{ticketId} method, you’ll see that any attachments on the ticket or ticket comments have null contents fields. The contents are loaded only when downloaded through the attachment/{attachmentId} method.

NOTE You may notice that the RESTful and SOAP web services don’t work fully anymore. This is because a Ticket requires a UserPrincipal and won’t save without it. You fix this in Chapter 28 when you learn about securing your web services.

SUMMARY

In this chapter you learned just about everything else you need to know about mapping entities in JPA. You explored creating attribute converters to handle nonstandard types and embedding POJOs within your entities using @Embeddable and @Embedded, and creating relationships between entities that are automatically or lazily loaded. You also saw how to version entities and create common entity ancestors, add Collections and Maps of basic and embedded values to your entities, store entities in multiple tables, and finally create triggers in Java code that activate before or after various CRUD operations. You also further refined the Customer Support application to get rid of its DTOs, simplify its service layer, and enhance ticket comments with attachments.

This concludes Part III of this book. It did not cover every minute detail of JPA and its APIs, and it omitted much of the XML mapping syntax that provides an alternative to using annotations. Instead, it focused on the critical tools that you need every day in your applications and showed you how to use these tools smarter with libraries like Spring Framework, Spring Data JPA, and Hibernate Search. The few tidbits you may still want to know about JPA, such as its XML mapping syntax, you can now read about and easily understand simply by downloading the specification document.

In Part IV, the final part of this book, you’ll explore keeping your application secure from unauthorized access using Spring Security and related tools.