Though one of the most essential tools for determining an ancient object’s age, carbon dating might not be as accurate as we once thought.
When the discovery of an archaeological find makes the news, we often hear that the age of the sample was determined using radiocarbon dating, known more simply as carbon dating.
Deemed the gold standard of archaeology, the method was developed in the late 1940s. It rests on the fact that radiocarbon (carbon-14) is constantly created in the atmosphere by cosmic rays; the carbon-14 then combines with atmospheric oxygen to form CO2, which plants incorporate during photosynthesis.
When the plant dies, or the animal that ate it dies, it stops exchanging carbon with the environment. From that point on, the carbon-14 it contains decays at a known rate, with a half-life of about 5,730 years, so measuring how much carbon-14 remains reveals the sample's age.
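As a rough illustration (and not the actual lab procedure), the underlying arithmetic is simple exponential decay: a sample retaining a fraction f of its original carbon-14 is t_half × log2(1/f) years old. A minimal sketch, assuming the commonly cited 5,730-year half-life:

```python
import math

HALF_LIFE_YEARS = 5_730  # approximate half-life of carbon-14

def radiocarbon_age(remaining_fraction: float) -> float:
    """Estimate a sample's age from the fraction of carbon-14 it retains.

    Uses simple exponential decay: t = t_half * log2(1 / f).
    Real laboratories also apply calibration curves, which is exactly
    where the variations described in this article come into play.
    """
    if not 0 < remaining_fraction <= 1:
        raise ValueError("remaining_fraction must be in (0, 1]")
    return HALF_LIFE_YEARS * math.log2(1 / remaining_fraction)

# A sample retaining 25% of its carbon-14 is roughly two half-lives old:
print(round(radiocarbon_age(0.25)))  # ~11,460 years
```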
But new research conducted at Cornell University could turn the field of archaeology on its head, claiming that commonly accepted carbon dating standards may contain a number of inaccuracies.
If this is true, many of our established historical timelines are thrown into question, potentially requiring a rewrite of the history books.
In a paper published in the Proceedings of the National Academy of Sciences, a team led by archaeologist Stuart Manning identified variations in the carbon-14 cycle at certain periods of time that could throw off timelines by as much as 20 years.
The reason for this, the team believes, could lie in climatic conditions in our distant past.