The idea of using on-farm trials to gather data about the performance of crop inputs predates modern science. Farmers tend to test anything new on small portions of their fields and compare the results with their common practice before adopting it across the whole farm.
This approach may work well for some crop inputs and some farmers: for example, when only two treatments need to be compared, the fields are relatively homogeneous, and the new treatment yields 10% more than the farmer’s common practice. In that situation, even if small errors occur in trial planning, data collection, or data analysis, the conclusion will be the same, and the farmer will be pleased the following year when the winning treatment is applied to the whole farm. However, as agricultural production systems become more complex, more rigorous methods are needed.
The number of inputs applied during a crop season increases every year, driven by the emergence of new pests and diseases, the availability of new fertilizer sources, and advances in plant physiology. This more intensive way of farming is responsible for the yield gains of recent decades, but it has also raised production costs. In the context of this article, this means the yield difference attributable to each crop input is smaller, so a higher level of accuracy is required throughout the experimentation process to ensure that the right conclusions are drawn.
Statistically speaking, this means more treatment levels are needed to account for interactions, and more replications to account for field spatial variability and reduce uncertainty. These requirements impose challenges for farmers and have pushed most agronomic experimentation from fields into the more controlled environments of research facilities. With the advent of precision agriculture and the development of new agricultural machines, some of the challenges of large-scale on-farm trials were overcome and interest in the subject was renewed. But that happened more than 20 years ago, and this important tool for improving agronomic knowledge and the whole production system is still not widely adopted. So, what makes me think the next 20 years will be completely different?
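To make the replication argument concrete, a back-of-the-envelope power calculation shows how fast the required number of replications grows as the detectable yield difference shrinks. This is a minimal sketch using a two-sample normal approximation; the coefficients of variation and difference sizes are hypothetical, not taken from any trial described here:

```python
import math
from statistics import NormalDist

def reps_needed(rel_diff, cv, alpha=0.05, power=0.80):
    """Approximate replications per treatment to detect a relative
    yield difference `rel_diff` between two treatments, given a
    plot-to-plot coefficient of variation `cv` (two-sample normal
    approximation; illustrative, not a substitute for a real design)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * (cv / rel_diff) ** 2)

# A 10% yield difference under 12% CV needs far fewer plots
# than a 3% difference under the same variability.
print(reps_needed(0.10, 0.12))  # 23 replications per treatment
print(reps_needed(0.03, 0.12))  # 252 replications per treatment
```

Cutting the detectable difference roughly in third multiplies the replication count by about nine, which is exactly why subtle input effects are hard to measure with a handful of farmer-managed strips.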
It is worth mentioning some previous articles posted on PrecisionAg.com, which report USDA funding for University of Illinois research on using precision technology in on-farm field trials to enable data-intensive fertilizer management, and Syngenta’s investment in Premier Crop Systems to accelerate the development of Enhanced Learning Blocks. The strong interest from both public and private sectors is a good indicator of the topic’s importance and represents a fundamental step toward making it accessible to more farmers.
Another important development is data sharing across networks of farmers. The results obtained in projects like Farmers Business Network and FarmLogs Research Network are examples of what can be achieved through farmer collaboration. In these circumstances, the large number of observations across different fields can compensate for the errors associated with data collection. This is especially true when production systems are similar and standard protocols can be easily implemented.
One of the reasons precision agriculture was thought to be a game changer for on-farm trials more than 20 years ago was that it would make it much simpler and cheaper to implement treatments, keep track of everything during the crop season, and analyze the results: just create a variable-rate prescription in seconds, collect the yield map data with the combine while harvesting, and draw the conclusions.
In practice, a lot of time had to be spent planning treatment locations, accounting for machine capabilities, converting data to an acceptable format, checking and calibrating sensors, dealing with lost data, converting yield data, cleaning the raw data, and analyzing the results. Some of these laborious and time-consuming tasks have been automated by precision agriculture software and the digital agriculture platforms that have emerged in recent years, but there is still room for substantial improvement and generalization. One important issue here is data standardization and ease of access, and some important developments toward resolving it have been made in recent months by the AgGateway initiative.
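As an illustration of the "cleaning the raw data" step, a common first pass on yield monitor data is to discard records outside a plausible range and then trim statistical outliers. This is only a sketch; the thresholds below are assumptions that would need tuning per crop, field, and sensor:

```python
from statistics import mean, stdev

def clean_yield(points, lo=500.0, hi=8000.0, k=3.0):
    """First-pass cleaning of yield monitor records (kg/ha):
    drop values outside an agronomically plausible range [lo, hi],
    then trim points more than k standard deviations from the mean.
    The lo/hi/k thresholds are illustrative assumptions."""
    plausible = [p for p in points if lo <= p <= hi]
    m, s = mean(plausible), stdev(plausible)
    return [p for p in plausible if abs(p - m) <= k * s]

raw = [3000, 3100, 2900, 3050, 0, 99999]  # two obvious sensor errors
print(clean_yield(raw))  # [3000, 3100, 2900, 3050]
```

Real pipelines also handle flow delay, swath overlap, and combine speed changes, but even this simple filter removes the gross errors that would otherwise dominate a trial's statistics.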
During a presentation I gave last month at the Seminar of Precision Agriculture, held in Piracicaba – SP, on improving agronomic recommendations in management zones, the audience’s questions made it clear that software and methodologies to analyze these data are still a major barrier to the widespread adoption of on-farm trials in Brazil. At Smart Agri we have been providing consultancy to farmers, service providers, and crop input companies on the use of on-farm trials. Although skilled professionals can overcome the challenges mentioned above, the quality of data collection has been the main issue. Most of the data we receive come from large farms, and with a fleet of combines it is much harder to assure data accuracy. Also worth mentioning are the diversity of production systems and the intensive land use, ranging from one to three crops a year in the same area. The previous crop is likely to influence the results, and the rush to seed the next crop will take priority over data collection.
Although there are many obstacles to the adoption of on-farm trials, we are excited by the potential demonstrated in some success stories. The maps shown are an example, illustrating the use of on-farm trials to evaluate cotton response to nitrogen application. Cotton is an important crop in Brazil, cultivated mainly by “high tech” farmers. The objective of these trials is to compensate for spatial and temporal variations in the optimum N rate. The total planned N amount is usually split into four applications: 10% at seeding, 30% at 20 days after seeding (DAS), 30% at 40 DAS, and 30% at 60 DAS. We use a fixed rate at seeding and implement the on-farm trials at 20 DAS, using variable rate based on historical data for the rest of the field. In the photo below, each square plot covers about 2 ha and the whole field covers 700 ha. The trials are replicated in each management zone.
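The split schedule above is simple arithmetic, but writing it out helps keep prescriptions consistent across fields. A small sketch of the 10/30/30/30 split described in the text; the 120 kg/ha total is a hypothetical example, not a recommendation:

```python
def n_splits(total_n, fractions=(0.10, 0.30, 0.30, 0.30)):
    """Split a total N rate (kg/ha) across the four applications
    described in the text: seeding, 20, 40 and 60 DAS.
    Fractions must sum to 1; the default follows the 10/30/30/30
    schedule, and the total below is a hypothetical example."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return [round(total_n * f, 1) for f in fractions]

print(n_splits(120))  # [12.0, 36.0, 36.0, 36.0] kg N/ha
```

In the trials, only the first split is fixed; from the 20 DAS application onward the rates for the trial plots deviate from this schedule by design.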
We then monitor crop development and adjust the next two applications based on the crop response. Although this may seem similar to the use of reference strips to determine nitrogen fertilizer requirements in crops such as corn and wheat, there is a key difference: the relationship between cotton vigor and cotton yield is not linear. Because of that, the rate applied must not limit cotton yield, yet it may still limit vegetative development, which means some crop scouting is needed to validate the model. We are also investing in technologies similar to those used for plant phenotyping to make data collection fully automated. With this use of on-farm trials we can achieve cotton yield improvements of 5% to 10% and nitrogen savings of 20% to 30%. Future technological developments will allow plot sizes to be reduced and the number of replications to be increased, generating more robust results.
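One common way to formalize a nonlinear yield response when choosing a rate that does not limit yield is a quadratic-plateau model. The sketch below is illustrative only: the model form is a standard agronomic choice, but the parameters and price ratio are hypothetical and not derived from the trials described in this article:

```python
def quad_plateau_yield(n, a, b, c):
    """Quadratic-plateau response: yield = a + b*n + c*n^2 up to the
    plateau at n0 = -b/(2c), flat afterwards (c < 0). The model form
    is a common agronomic choice; a, b, c here are hypothetical."""
    n0 = -b / (2 * c)
    x = min(n, n0)
    return a + b * x + c * x * x

def economic_rate(b, c, price_ratio):
    """N rate where the marginal yield gain equals the N:crop price
    ratio (kg crop per kg N): solve b + 2*c*n = price_ratio,
    capped at the agronomic plateau."""
    n_star = (price_ratio - b) / (2 * c)
    return min(max(n_star, 0.0), -b / (2 * c))

# Hypothetical parameters: base yield 2000 kg/ha, plateau near 167 kg N/ha.
a, b, c = 2000.0, 10.0, -0.03
print(round(economic_rate(b, c, price_ratio=2.0), 1))  # 133.3 kg N/ha
# Beyond the plateau, extra N adds no yield:
assert quad_plateau_yield(200, a, b, c) == quad_plateau_yield(300, a, b, c)
```

The economic optimum sits below the agronomic plateau, which is why the field data from replicated plots, rather than vigor alone, are needed to set the final rates.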