Articles in English

Drop test physics III

Tuomas Pöysti 2021

In the previous part, I played with the data (force vs time) from a single drop test and saw what I could do using the concept of impulse. It did not do much to help understand how energy is dissipated during a small fall on a static lanyard, but it was fun. Now it is time to get serious (ha).

Derivatives and integrals

A bit more high school physics: First there is location or displacement. It tells where an object is. Velocity is the rate of change in displacement; it tells how fast the displacement is changing – or how fast the object is moving, as we tend to say. Similarly, acceleration is the rate of change in velocity.

To be exact:

  • Velocity is the derivative of displacement with respect to time
  • Acceleration is the derivative of velocity with respect to time
  • Acceleration is the second derivative of displacement with respect to time

And the other way around:

  • Displacement is the integral of velocity over time
  • Velocity is the integral of acceleration over time.

That is, if we have displacement data over time, we can calculate the velocity over time, even if it did not remain constant.

And vice versa, if we have data of acceleration over time, we can calculate velocity over time. And using data of velocity, we can calculate displacement.

But there’s a stumbling block in the last ones, the ones using the integral. Namely, we cannot actually calculate an object’s displacement from its velocity data alone, if we don’t know the displacement it started with – or any displacement along the way. The same goes for velocity and acceleration. We’ll see!
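Since I did everything in a spreadsheet, the mechanics are easy to miss. Here is a minimal sketch in Python of both directions, with made-up displacement numbers (not data from the test), including the stumbling block: integrating back only works if the starting value is known.

```python
# Minimal sketch, not from the drop test: made-up displacement samples.
dt = 0.002                                    # time step in seconds
displacement = [0.0, 0.01, 0.04, 0.09, 0.16]  # metres

# Derivative: velocity as the change in displacement per time step
velocity = [(displacement[i + 1] - displacement[i]) / dt
            for i in range(len(displacement) - 1)]
print([round(v, 6) for v in velocity])  # [5.0, 15.0, 25.0, 35.0]

# Integral: displacement back from velocity - but only up to an
# unknown starting value, the constant of integration
x0 = 0.0  # we have to know (or guess) this
recovered = [x0]
for v in velocity:
    recovered.append(recovered[-1] + v * dt)
print([round(x, 6) for x in recovered])  # [0.0, 0.01, 0.04, 0.09, 0.16]
```

The round trip only reproduces the original numbers because the starting value x0 happened to be right; any other guess shifts the whole recovered curve by a constant.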


What useful information do we have, then? In the previous part we ended up with data of ma, my mass multiplied by acceleration, over time. If we (simply using a spreadsheet application – I use Google Sheets) divide those numbers by my mass (still 82 kg, there were the holidays and all), we get:

Notice, by the way, that without any specific reason I moved the timescale so that 0 is at the maximum force peak.

Now we know how (the center of gravity of) my body accelerated along the vertical axis during the test. Positive acceleration is upwards. In the very beginning of the dataset, the acceleration seems to be a little less than 10 m/s^2 downwards. It is, of course, the gravitational acceleration, meaning that I was in free fall.

Remember “G forces” from Top Gun? At the top of the curve, I briefly experienced 31.5 m/s^2, or 3.2g. Not a big deal compared to the sustained 9g that fighter and aerobatics pilots take… But in my defense, I have to add: this data is about ma only, the mg part has been subtracted. That is, we actually need to add the Earth’s gravity if we want to see who is the tough guy! Well, 4.2g for a fraction of a second is not very impressive, either. And it felt like a lead fall.

I cropped the data a bit hastily, so the very next thing we see is the lanyard starting to catch. As the curve reaches the horizontal axis, the lanyard, harness and my bottom have tensioned enough for the force to equal my body weight.


The next step is to integrate the acceleration data over time to get the velocity of my body’s center of gravity. It sounds way fancier and harder than it is, especially when I again used the most basic numerical way of doing it.

Using the real life example: the first three acceleration values in the dataset are (time and acceleration):

  1. @ 0.000: -9.81
  2. @ 0.002: -9.57
  3. @ 0.004: -9.57

Now, let’s assume the initial velocity is 0. The simple numerical integration simply assumes that after the first row of the dataset, at 0.002 seconds, the velocity is

(0 + 0.002 * -9.81) m/s

That is, the first row took the initial velocity and added whatever the current acceleration contributes during that time, 0.002 seconds. So, adding the velocities as third numbers (rounded for clarity):

  1. @ 0.000: -9.81 0
  2. @ 0.002: -9.57 -0.02
  3. @ 0.004: -9.57 -0.04

As seen, the velocity starts to drop (or grow in a negative direction). And it of course drops between the second and third row, too, although the acceleration does not change, since any non-zero acceleration means that velocity changes. We end up with this curve:
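Spelled out as code instead of spreadsheet formulas, the running sum looks like this (a minimal sketch; the three sample rows are the ones listed above):

```python
dt = 0.002  # sampling interval in seconds
acceleration = [-9.81, -9.57, -9.57]  # m/s^2, the three sample rows

v = 0.0          # assumed initial velocity
velocity = [v]
for a in acceleration[:-1]:
    v += a * dt  # add whatever the current acceleration does in 0.002 s
    velocity.append(v)

print([round(x, 4) for x in velocity])  # [0.0, -0.0196, -0.0388]
```

These are the same numbers as in the list above, just rounded one digit less aggressively.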

It does not make any sense. Why does the velocity seem to settle to somewhere around 1.4 m/s? It’s of course because of the initial velocity. We just assumed it to be 0, but we don’t actually know what my velocity was at the beginning of the dataset.

Luckily there’s a good way of figuring out the correct initial velocity! The right end of the curve should settle towards zero, right? Just by trial and error, the value -1.43 can be found, and applying it as the initial velocity instead of 0 yields:
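Incidentally, the trial and error can be skipped: since the initial velocity only shifts the whole curve by a constant, the shift can be read straight off the tail. A sketch of the idea, with made-up acceleration data standing in for the real measurements:

```python
dt = 0.002

# Stand-in acceleration data: free fall, a braking pulse, then rest.
# (Illustrative only - the real dataset has over 500 measured rows.)
acceleration = [-9.81] * 50 + [40.0] * 12 + [0.0] * 50

# Integrate with an assumed initial velocity of 0, as before
v, velocity = 0.0, []
for a in acceleration:
    velocity.append(v)
    v += a * dt

# The tail should settle to zero; whatever it settles to instead
# is (minus) the true initial velocity
tail = velocity[-20:]
v0 = -sum(tail) / len(tail)
corrected = [x + v0 for x in velocity]

print(round(v0, 3))            # the recovered initial velocity
print(round(corrected[-1], 6)) # the corrected tail now sits at zero
```

With the real data this gives the same -1.43 m/s that trial and error found, just in one step.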

Notice where the curve crosses the vertical axis (the point in time when the force had its maximum value). This is typical of functions and their integrals: they generally do not peak at the same points. Since acceleration is the rate of change in velocity, when the acceleration (and thus the force) peaks, the velocity is not at its extreme but changing at its highest rate. We’ll see more of this when we get to…


I have to say, I was not convinced that the numerical method would actually work with this data when I first tried it. But performing the second integration in the same way, and finding the correct initial displacement (0.162 in this case), yielded a dataset where zero displacement is defined as the final one:

The initial value is the displacement of my center of gravity at the beginning of the dataset. That is, I was still falling, 16.2 cm above the final level, and as we learned in the previous section, my velocity was -1.43 m/s. I don’t know about you, but I think this is pretty amazing considering we only knew the gravitational acceleration and could measure force against time to start with! Of course, I have not yet repeated these studies, and I have no secondary way of confirming these values.
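Both integration passes are the same running sum, so they can share one small helper. A sketch (the initial values -1.43 and 0.162 are the ones found above; the acceleration and velocity arrays would be the real data):

```python
def integrate(values, dt, initial):
    """Cumulative (Euler) integration: each output value is the
    previous one plus the current rate of change times the time step."""
    out = [initial]
    for v in values[:-1]:
        out.append(out[-1] + v * dt)
    return out

# A tiny check: constant velocity of 1 m/s gives evenly growing displacement
print(integrate([1.0, 1.0, 1.0], 0.5, 0.0))  # [0.0, 0.5, 1.0]

# With the real data the two passes would be chained like this:
# velocity     = integrate(acceleration, 0.002, -1.43)
# displacement = integrate(velocity,     0.002,  0.162)
```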

To show how sensitive this numerical method is to inaccurate data, let’s pretend for a while that I wanted to fake my mass a bit downwards, to 81.7 kg instead of 82 kg:

The error clearly accumulates towards the end, which is easy to understand when you consider the method: every value is based on the preceding one, and there are over 500 of them in this short sample. Luckily this study has solid ways of calibrating the values, since we know exactly what the tails of the curves should look like!
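This is easy to demonstrate. In the sketch below the force data is made up (the pulse height and row counts are only chosen to resemble the test), and I am assuming the raw measurement is the lanyard force, which is zero during free fall. Dividing the same force readings by a slightly wrong mass then shifts every later acceleration value, and the running sums drift further and further apart:

```python
dt = 0.002
g = 9.81
m_true, m_fake = 82.0, 81.7  # real mass vs the slightly "faked" one

# Stand-in lanyard force in newtons: zero in free fall, a braking pulse,
# then hanging at body weight. Made-up values, sized like the real set.
force = [0.0] * 100 + [2815.0] * 40 + [m_true * g] * 360

def to_velocity(mass):
    # Acceleration from force and the assumed mass (a = F/m - g),
    # then the same running-sum integration as before
    v, out = -1.43, []
    for f in force:
        out.append(v)
        v += (f / mass - g) * dt
    return out

gap = [abs(a - b) for a, b in zip(to_velocity(m_true), to_velocity(m_fake))]
print(round(gap[100], 4), round(gap[150], 4), round(gap[-1], 4))
# free fall is unaffected, but after the catch the gap keeps widening
```

Note that the free-fall portion is immune to the faked mass (zero force divided by anything is still zero), which is exactly why the tails of the curves make such a good calibration target.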

To be continued…

The results are discussed in the fourth part.