The word zero is attributed to many ancient cultures and most likely derives from the Arabic sifr, meaning 'empty'. That Arabic root gave rise both to the English word cipher and, via the Italian zefiro and the French zéro, to the word zero itself. The invention of zero as a number in its own right, and not merely as a place marker, is generally attributed to the ancient Indians. By the 9th century AD, mathematical calculations involving both division and multiplication made use of the number zero. The Indian scholar Pingala also described a binary-like system of long and short syllables, which has been compared to Morse code.
The rules for using zero are credited to Brahmagupta, another ancient Indian scholar. His treatise, the Brahmasphutasiddhanta ('The Opening of the Universe'), was written in 628 AD. Along with rules for zero, the great scholar and mathematician also laid down rules for negative numbers and algebra.
By the 5th century, zero was being used to signify position; the earliest known mention of this arrangement appears in the text Lokavibhaga, which dates from that period. By the 11th century, the decimal system, which uses base 10, had finally reached the shores of Europe. It was brought to the continent by the Spanish Muslims known as the Moors, who also carried with them their knowledge of astronomy, and this Arabic route of transmission is why the symbols are still referred to as Arabic numerals. Fibonacci was instrumental in spreading the system across Europe and applying it to everyday mathematical problem solving.
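The power of the positional system described above comes from zero acting as a placeholder: it keeps the other digits in their correct columns. A minimal sketch in Python (the function name `from_digits` is my own, for illustration) shows how the same non-zero digits denote different numbers once a zero holds a place:

```python
def from_digits(digits, base=10):
    """Evaluate a sequence of digits in the given base, most significant first."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left one column, then add the digit
    return value

# The zero in the tens column changes the value entirely:
print(from_digits([1, 0, 5]))  # 105
print(from_digits([1, 5]))     # 15
```

Without a placeholder symbol, '105' and '15' would be written with the same two digits and could not be told apart, which is exactly the ambiguity the positional zero resolved.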
Zero now plays a central role in computing and in many other branches of science, where it often carries special significance.