Implementing a java agent to instrument code

With a system running 24/7, you have to make sure that it performs well at any time of the day. Several commercial solutions exist to monitor the performance of systems: NewRelic, GraphDat and many others. They let you see, for instance, whether API call response times change week after week or after each release of the project, so developers can easily spot where the bottlenecks are and fix them.

You can also use profilers such as JProfiler or YourKit to detect bottlenecks, memory leaks, thread leaks, and so on.

Most of these tools work by using a Java agent, a pluggable library that runs embedded in a JVM and intercepts the class-loading process. By modifying the classes as they are loaded, they can dynamically change the class instructions to perform method logging, performance measurement, and more.

In this post we are going to describe how to implement a simple Java agent that measures how often some methods of your application are called and how long they take, and publishes the results to JMX and to Graphite.
We will use the following technologies:

  • javassist: a library to manipulate Java bytecode
  • Coda Hale Metrics: a library that provides classes to measure times and compute average times, mean rates, percentiles, and more

To follow the instructions in this post, you will need a JDK, Maven and, optionally, a Graphite server.

You can get the code presented in this post from github:

git clone https://github.com/fredang/agent-metric-example.git

Agent

To start a JVM with a specific agent, you just need to pass the -javaagent option to the java command:

java -javaagent:<agent.jar>=<agent_arguments> <mainClass>

At startup, the JVM invokes the premain method of the agent class, passing it the agent arguments and an Instrumentation instance that we can use to register code transformations. The JVM knows which class in the agent jar to use by looking at the Premain-Class attribute of the MANIFEST.MF file.


package com.chimpler.example.agentmetric;

import java.lang.instrument.Instrumentation;
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.util.HashMap;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MetricAgent {
    private final static Logger logger = LoggerFactory.getLogger(MetricAgent.class);

    public static void premain(String agentArguments, Instrumentation instrumentation) {
        RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean();
        logger.info("Runtime: {}: {}", runtimeMxBean.getName(), runtimeMxBean.getInputArguments());
        logger.info("Starting agent with arguments " + agentArguments);

        MetricReporter.startJmxReporter();

        if (agentArguments != null) {
            // parse the arguments, e.g.:
            // graphite.host:localhost,graphite.port:2003,graphite.prefix:test
            Map<String, String> properties = new HashMap<String, String>();
            for (String propertyAndValue : agentArguments.split(",")) {
                String[] tokens = propertyAndValue.split(":", 2);
                if (tokens.length != 2) {
                    continue;
                }
                properties.put(tokens[0], tokens[1]);
            }

            String graphiteHost = properties.get("graphite.host");
            if (graphiteHost != null) {
                int graphitePort = 2003;
                String graphitePrefix = properties.get("graphite.prefix");
                if (graphitePrefix == null) {
                    graphitePrefix = "test";
                }
                String graphitePortString = properties.get("graphite.port");
                if (graphitePortString != null) {
                    try {
                        graphitePort = Integer.parseInt(graphitePortString);
                    } catch (Exception e) {
                        logger.info("Invalid graphite port {}: {}", graphitePortString, e.getMessage());
                    }
                }
                MetricReporter.startGraphiteReporter(graphiteHost, graphitePort, graphitePrefix);
            }
        }

        // register the class transformer to use
        instrumentation.addTransformer(new TimedClassTransformer());
    }
}

In this agent, we are using a MetricReporter to compute and publish the measures to JMX and Graphite by using the coda hale metrics library:

package com.chimpler.example.agentmetric;

import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.codahale.metrics.JmxReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

public class MetricReporter {
    private final static Logger logger = LoggerFactory.getLogger(MetricReporter.class);

    private static MetricRegistry metricRegistry = new MetricRegistry();

    public static void startJmxReporter() {
        logger.info("Init JMX reporter");

        JmxReporter jmxReporter = JmxReporter
                .forRegistry(metricRegistry)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .convertRatesTo(TimeUnit.MINUTES)
                .build();
        jmxReporter.start();
    }

    public static void startGraphiteReporter(String host, int port, String prefix) {
        logger.info("Init Graphite reporter: host={}, port={}, prefix={}", host, port, prefix);
        Graphite graphite = new Graphite(new InetSocketAddress(host, port));
        GraphiteReporter graphiteReporter = GraphiteReporter.forRegistry(metricRegistry)
                .prefixedWith(prefix)
                .build(graphite);
        graphiteReporter.start(1, TimeUnit.MINUTES);
    }

    // called by instrumented methods
    public static void reportTime(String name, long timeInMs) {
        Timer timer = metricRegistry.timer(name);
        timer.update(timeInMs, TimeUnit.MILLISECONDS);
    }
}

Coda Hale Metrics provides several classes to measure times. Here, we are using the Timer class. Each time we want to log a measure, we invoke Timer.update with the time it took to execute an action, and the class takes care of computing the mean execution time, the mean rate, the min, the max, and so on.
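
For comparison, if you were instrumenting code by hand rather than through the agent, the same Timer can also be driven with a Timer.Context, which measures the elapsed time for you. A minimal sketch (with its own MetricRegistry; the class and metric names here are only illustrative):

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class ManualTimingExample {
    private static final MetricRegistry metricRegistry = new MetricRegistry();

    public static void doSomeWork() throws InterruptedException {
        Timer timer = metricRegistry.timer("com.chimpler.example.ManualTimingExample.doSomeWork");
        // time() starts the clock; stop() records the elapsed duration in the timer
        Timer.Context context = timer.time();
        try {
            Thread.sleep(500);
        } finally {
            context.stop();
        }
    }
}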

In the agent code, we register a class transformer (TimedClassTransformer) that gets a chance to transform every class loaded by the class loader.

package com.chimpler.example.agentmetric;

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.IllegalClassFormatException;
import java.security.ProtectionDomain;

import javassist.ByteArrayClassPath;
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.LoaderClassPath;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimedClassTransformer implements ClassFileTransformer {
    private final static Logger logger = LoggerFactory.getLogger(TimedClassTransformer.class);
    private ClassPool classPool;

    public TimedClassTransformer() {
        classPool = new ClassPool();
        classPool.appendSystemPath();
        try {
            classPool.appendPathList(System.getProperty("java.class.path"));

            // make sure that MetricReporter is loaded
            classPool.get("com.chimpler.example.agentmetric.MetricReporter").getClass();
            classPool.appendClassPath(new LoaderClassPath(ClassLoader.getSystemClassLoader()));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public byte[] transform(ClassLoader loader, String fullyQualifiedClassName, Class<?> classBeingRedefined,
            ProtectionDomain protectionDomain, byte[] classBytes) throws IllegalClassFormatException {
        String className = fullyQualifiedClassName.replace("/", ".");

        classPool.appendClassPath(new ByteArrayClassPath(className, classBytes));

        try {
            CtClass ctClass = classPool.get(className);
            if (ctClass.isFrozen()) {
                logger.debug("Skip class {}: is frozen", className);
                return null;
            }

            if (ctClass.isPrimitive() || ctClass.isArray() || ctClass.isAnnotation()
                    || ctClass.isEnum() || ctClass.isInterface()) {
                logger.debug("Skip class {}: not a class", className);
                return null;
            }

            boolean isClassModified = false;
            for (CtMethod method : ctClass.getDeclaredMethods()) {
                // if the method is annotated, add the code to measure the time
                if (method.hasAnnotation(Measured.class)) {
                    try {
                        if (method.getMethodInfo().getCodeAttribute() == null) {
                            logger.debug("Skip method " + method.getLongName());
                            continue;
                        }
                        logger.debug("Instrumenting method " + method.getLongName());
                        method.addLocalVariable("__metricStartTime", CtClass.longType);
                        method.insertBefore("__metricStartTime = System.currentTimeMillis();");
                        String metricName = ctClass.getName() + "." + method.getName();
                        method.insertAfter("com.chimpler.example.agentmetric.MetricReporter.reportTime(\""
                                + metricName + "\", System.currentTimeMillis() - __metricStartTime);");
                        isClassModified = true;
                    } catch (Exception e) {
                        logger.warn("Skipping instrumentation of method {}: {}", method.getName(), e.getMessage());
                    }
                }
            }
            if (isClassModified) {
                return ctClass.toBytecode();
            }
        } catch (Exception e) {
            logger.debug("Skip class {}: {}", className, e.getMessage());
        }
        return classBytes;
    }
}

Every method annotated with @Measured is rewritten by adding instructions before and after the original method code (a minimal definition of the @Measured annotation itself is sketched after the snippet):

long __metricStartTime = System.currentTimeMillis();
// original method code
com.chimpler.example.agentmetric.MetricReporter.reportTime(metricName, System.currentTimeMillis() - __metricStartTime);

The last step is to define the file MANIFEST.MF and package the agent.

Content of the file MANIFEST.MF:

Premain-Class: com.chimpler.example.agentmetric.MetricAgent

Package the agent:

cd agent
mvn clean compile install assembly:single

Running a java program with the java agent

To test the agent, we are going to run the agent on the following program:

public class RunExample {
	private Random random = new Random();

	public RunExample() {

	}

	@Measured
	public void doSleep() {
		try {
			Thread.sleep(10000);
		} catch (InterruptedException e) {
		}
	}

	@Measured
	private void doTask() {
		try {
			Thread.sleep(random.nextInt(1000));
		} catch (InterruptedException e) {
		}
	}

	@Measured
	public void doWork() {
		for(int i = 0 ; i < random.nextInt(10) ; i++) {
			doTask();
		}
	}

	public static void main(String[] args) {
		RunExample test = new RunExample();
		while(true) {
			test.doWork();
			test.doSleep();
		}
	}
}

This program indefinitely calls the methods doWork and doSleep.

To build the application:

cd agent-test
mvn clean assembly:single

Now you can run the application with the agent:
If you don’t have graphite installed on your computer:

java -javaagent:../agent/target/metric-agent-jar-with-dependencies.jar \
        -cp target/agent-test-jar-with-dependencies.jar \
        com.chimpler.example.agentmetric.example.RunExample

If you have graphite installed:

java -javaagent:../agent/target/metric-agent-jar-with-dependencies.jar=graphite.host:localhost,graphite.port:2003,graphite.prefix:test \
        -cp target/agent-test-jar-with-dependencies.jar \
        com.chimpler.example.agentmetric.example.RunExample

You can see the metrics through JMX by using jconsole:

jconsole

[Screenshot: the agent metrics exposed through JMX, viewed in jconsole]

And also on graphite:
[Screenshot: the agent metrics graphed in Graphite]

As expected, the average execution time of the method doTask is around 500ms.

Coda Hale Metrics computes several measures; the main ones are listed below, and the short sketch after the list shows how to read them directly from the MetricRegistry:

  • Count: number of times the method has been called since the program started
  • Mean: mean execution time of the method
  • Min: minimum execution time of the method
  • Max: maximum execution time of the method
  • MeanRate: number of times per minute the method was invoked
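
These values can also be read programmatically, for instance to dump them when the program exits. A small sketch (note that the Timer keeps its durations in nanoseconds internally, so they are converted back to milliseconds here):

import java.util.Map;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.Timer;

public class MetricDumper {
    public static void dump(MetricRegistry metricRegistry) {
        for (Map.Entry<String, Timer> entry : metricRegistry.getTimers().entrySet()) {
            Timer timer = entry.getValue();
            Snapshot snapshot = timer.getSnapshot();
            // durations are stored in nanoseconds internally
            double meanMs = snapshot.getMean() / TimeUnit.MILLISECONDS.toNanos(1);
            double maxMs = snapshot.getMax() / (double) TimeUnit.MILLISECONDS.toNanos(1);
            System.out.printf("%s: count=%d, mean=%.1fms, max=%.1fms%n",
                    entry.getKey(), timer.getCount(), meanMs, maxMs);
        }
    }
}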

Conclusion

We have seen in this post how to write a simple agent to measure the number of times a method is invoked and its execution time.
The agent can be improved to log more information, for instance SQL queries executed through JDBC or REST calls per user.
Those metrics can be used to compare the performance of a system after each release deployment, or to trigger alerts when the rate or execution time of a method goes above a threshold (using Nagios, for instance).

Using the Mahout Naive Bayes Classifier to automatically classify Twitter messages (part 2: distribute classification with hadoop)


In this post, we are going to categorize the tweets by distributing the classification on the hadoop cluster. It can make the classification faster if there is a huge number of tweets to classify.

To go through this tutorial you would need to have run the commands in the post Using the Mahout Naive Bayes Classifier to automatically classify Twitter messages.

To distribute the classification on the hadoop nodes, we are going to define a mapreduce job:

  • the CSV file containing the tweets to classify is split into several chunks
  • each chunk is sent to a hadoop node that processes it by running the map class
  • the map class loads the naive bayes model and some document/word frequencies into memory
  • for each tweet in the chunk, it computes the best matching category and writes the result to the output file. We are not using a reducer class as we don't need to do any aggregation (a minimal map-only job driver is sketched below).
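
For reference, a map-only job is obtained by simply setting the number of reducers to zero, so the mapper output is written directly to the output files. A minimal driver sketch using the Hadoop 2 mapreduce API (ClassifyTweetsJob and ClassifyMapper are illustrative names; the real mapper would load the Mahout model in its setup method and classify each tweet in map):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ClassifyTweetsJob {
    // placeholder mapper: the real one loads the naive bayes model in setup()
    // and emits (tweet, best matching category) for each input line
    public static class ClassifyMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, new Text("unknown"));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "classify tweets");
        job.setJarByClass(ClassifyTweetsJob.class);
        job.setMapperClass(ClassifyMapper.class);
        job.setNumReduceTasks(0); // map-only: no aggregation needed
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}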

To download the code used in this post, you can fetch it from github:

$ git clone https://github.com/fredang/mahout-naive-bayes-example2.git

To compile the project:

$ mvn clean package assembly:single


Installing and comparing MySQL/MariaDB, MongoDB, Vertica, Hive and Impala (Part 1)

A common thing a data analyst does in his day-to-day job is to run aggregations of data, generally by summing and averaging columns using different filters. When tables start to grow to hundreds of millions or billions of rows, these operations become extremely expensive and the choice of a database engine is crucial. Indeed, the more queries an analyst can run during the day, the better he can be at understanding the data.

In this post, we’re going to install 5 popular databases on Linux Ubuntu (12.04):

  • MySQL / MariaDB 10.0: Row based database
  • MongoDB 2.4: NoSQL database
  • Vertica Community Edition 6: Columnar database (similar to Infobright, InfiniDB, …)
  • Hive 0.10: Datawarehouse built on top of HDFS using Map/Reduce
  • Impala 1.0:  Database implemented on top of HDFS (compatible with Hive) based on Dremel that can use different data formats (raw CSV format, Parquet columnar format, …)

Then we’ll provide some scripts to populate them with some test data, run some simple aggregation queries and measure the response time. The tests will be run on only one box without any tuning using a relatively small dataset (160 million rows), but we’re planning on running more thorough tests in the cloud later with much bigger datasets (billions of rows). This is just to give a general idea of the performance of each of the databases.

Finding association rules with Mahout Frequent Pattern Mining

Association Rule Learning is a method to find relations between variables in a database. For instance, using shopping receipts, we can find associations between items: bread is often purchased with peanut butter, or chips and beer are often bought together. In this post, we are going to use the Mahout Frequent Pattern Mining implementation to find the associations between items using a list of shopping transactions. For details on the algorithms (Apriori and FP-Growth) used to find frequent patterns, you can look at “The comparative study of apriori and FP-growth algorithm” from Deepti Pawar.

EDIT 2014-01-08: updated link to data sample marketbasket.csv (old link was dead). Corrected lift computation. Thanks Felipe F. for pointing the error in the formula.
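
For reference, the standard metrics attached to a rule X ⇒ Y (including the lift mentioned in the edit above) are defined as:

\mathrm{support}(X \Rightarrow Y) = P(X \cup Y), \qquad
\mathrm{confidence}(X \Rightarrow Y) = \frac{P(X \cup Y)}{P(X)}, \qquad
\mathrm{lift}(X \Rightarrow Y) = \frac{P(X \cup Y)}{P(X)\,P(Y)}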

Generating EigenFaces with Mahout SVD to recognize person faces


In this tutorial, we are going to describe how to generate and use eigenfaces to recognize people’s faces.
Eigenfaces are a set of eigenvectors derived from the covariance matrix of the probability distribution of the high-dimensional vector space of possible human faces. They can be used to identify a face in a picture against a face database very quickly. In this post, we won’t give many details on the mathematical aspects, but if you are interested in those, you can look at the excellent post Face Recognition using Eigenfaces and Distance Classifiers: A Tutorial from the Onionesque Reality Blog.
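
Concretely, if the M training faces are flattened into vectors x_1, …, x_M with mean face \mu, the eigenfaces are the leading eigenvectors of the covariance matrix:

\mu = \frac{1}{M}\sum_{i=1}^{M} x_i, \qquad
C = \frac{1}{M}\sum_{i=1}^{M} (x_i - \mu)(x_i - \mu)^{\top}

A new face is then projected onto the top eigenvectors and compared to the projections of the known faces.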


Playing with Apache Hive and SOLR

As described in a previous post, Apache SOLR can perform very well to provide low latency analytics. Data logs can be pre-aggregated using Hive and then synced to SOLR. To this end, we developed a simple Storage Handler for SOLR so that data can be read from and written to SOLR transparently using an external table.

We will show in this post how to install our SOLR storage handler and then run a simple example where we sync some data from Hive to SOLR.

Using the Mahout Naive Bayes Classifier to automatically classify Twitter messages

Classification algorithms can be used to automatically classify documents and images, implement spam filters, and in many other domains. In this tutorial we are going to use Mahout to classify tweets using the Naive Bayes Classifier. The algorithm works by using a training set, which is a set of documents already associated to a category. Using this set, the classifier determines, for each word, the probability that it makes a document belong to each of the considered categories. To compute the probability that a document belongs to a category, it multiplies together the individual probabilities of each of its words in this category. The category with the highest probability is the one the document is most likely to belong to.
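
Written out, and including the class prior, the classifier picks for a document d = (w_1, …, w_n) the category:

\hat{c} \;=\; \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(w_i \mid c)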

To get more details on how the Naive Bayes Classifier is implemented, you can look at the mahout wiki page.

This tutorial will give you a step-by-step description on how to create a training set, train the Naive Bayes classifier and then use it to classify new tweets.

