Logarithm Calculator
Calculate logarithms in any base — natural log (ln), log base 10, log base 2, and custom bases. Includes log rules reference, powers table, and cross-base comparison chart.
A logarithm is the mathematical operation that determines how many times a specific number, known as the base, must be multiplied by itself to reach another specific value. It serves as the fundamental inverse of exponentiation, allowing mathematicians, scientists, and engineers to solve complex equations where the unknown variable is an exponent, and to compress astronomically large or microscopically small numbers into manageable, human-readable scales. By mastering logarithms, you will unlock the ability to comprehend everything from the compounding growth of financial investments to the exponential decay of radioactive isotopes and the vast magnitude of earthquakes.
What It Is and Why It Matters
To understand a logarithm, you must first understand exponentiation. When you raise a number to a power, such as calculating 10 to the power of 3 ($10^3$), you are multiplying 10 by itself three times ($10 \times 10 \times 10$) to get 1,000. A logarithm simply asks this question in reverse: "To what power must I raise the base (10) to get the result (1,000)?" The answer is 3. Therefore, the logarithm of 1,000 with a base of 10 is exactly 3. This concept exists because addition and multiplication are not sufficient to handle scenarios where growth or decay happens multiplicatively over time. Just as subtraction was invented to undo addition, and division was invented to undo multiplication, the logarithm was invented to undo exponentiation.
The existence of logarithms solves a massive problem in human comprehension and mathematical calculation: scale. The universe operates on scales that are far too vast or too minuscule for standard linear arithmetic. For example, the distance to the nearest star is measured in trillions of miles, while the diameter of an atom is measured in fractions of a nanometer. If we tried to graph these values on a standard linear scale, the paper would need to be millions of miles long. Logarithms allow us to compress these massive ranges into a tight, manageable scale where every step represents a multiplication factor rather than an addition factor. Anyone working in finance, biology, physics, computer science, or engineering relies on logarithms daily to model populations, calculate algorithmic efficiency, determine chemical acidity, and measure acoustic energy. Without logarithms, modern science and quantitative finance would mathematically grind to a halt.
History and Origin
The concept of the logarithm was independently conceived in the early 17th century by two brilliant mathematical minds: the Scottish baron John Napier and the Swiss clockmaker Joost Bürgi. John Napier is universally credited as the primary inventor, publishing his seminal work Mirifici Logarithmorum Canonis Descriptio (Description of the Admirable Table of Logarithms) in 1614 after nearly two decades of grueling manual calculations. Napier’s primary motivation was to simplify the incredibly tedious and error-prone calculations required in astronomy and celestial navigation. Before computers or calculators, astronomers had to multiply massive trigonometric figures by hand, a process that could take days and was highly susceptible to simple arithmetic errors. Napier realized that by mapping a geometric progression (where numbers multiply) to an arithmetic progression (where numbers add), he could turn complex multiplication and division problems into simple addition and subtraction problems.
Shortly after Napier's publication, the English mathematician Henry Briggs recognized the immense utility of Napier's invention but saw room for improvement. In 1615, Briggs traveled to Edinburgh to meet Napier, and together they agreed to alter the base of the logarithms to 10, creating what we now call "common logarithms." Briggs subsequently calculated and published extensive tables of these base-10 logarithms, which became an instant sensation across Europe. For over 350 years, until the invention of the electronic pocket calculator in the 1970s, scientists and engineers relied entirely on printed logarithm tables and the slide rule—a mechanical calculating device built entirely on logarithmic scales. In the 18th century, the legendary Swiss mathematician Leonhard Euler formally connected logarithms to exponential functions, solidifying their place in modern calculus and giving rise to the natural logarithm based on the constant $e$. The history of the logarithm is the history of human computational acceleration; it was the software update that allowed the scientific revolution to calculate the cosmos.
Key Concepts and Terminology
To confidently navigate the world of logarithms, you must master the specific vocabulary used to describe their components and behaviors. The Base is the foundational number that is being multiplied by itself. In the expression $\log_2(8) = 3$, the number 2 is the base. The base dictates the fundamental "step size" of the exponential growth; a base of 2 implies doubling, while a base of 10 implies growing by a factor of ten. The Argument is the target number you are trying to reach. In the previous example, 8 is the argument. It represents the final outcome of the exponentiation process. The Logarithm (or the answer to the equation) is the exponent itself. In our example, 3 is the logarithm, meaning it is the power to which the base must be raised to produce the argument.
When discussing logarithms, you will also encounter the terms Mantissa and Characteristic, which are historical terms still relevant in certain mathematical contexts. The characteristic is the integer (whole number) part of a logarithm, which tells you the order of magnitude of the original number. The mantissa is the fractional (decimal) part of the logarithm, which provides the precise digits of the number regardless of the decimal point's location. For example, the base-10 logarithm of 250 is approximately 2.3979. Here, 2 is the characteristic (indicating the number is in the hundreds, between $10^2$ and $10^3$), and 0.3979 is the mantissa. Furthermore, the term Antilogarithm refers to the inverse process of taking a logarithm. If you have the logarithm and the base, finding the antilogarithm means calculating the original argument. Taking the antilogarithm of 3 with a base of 10 means calculating $10^3$, which returns you to 1,000. Understanding this vocabulary ensures you can read mathematical formulas, use computational tools, and communicate complex relationships with absolute precision.
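These definitions translate directly into code. Here is a minimal Python sketch (variable names are illustrative) that splits a logarithm into its characteristic and mantissa and then reverses it with an antilogarithm, using only the standard math module:

```python
import math

x = 250.0
log_x = math.log10(x)               # ≈ 2.3979

characteristic = math.floor(log_x)  # integer part: 2 → the number is in the hundreds
mantissa = log_x - characteristic   # fractional part: ≈ 0.3979

# Antilogarithm: raise the base to the logarithm to recover the argument.
recovered = 10 ** log_x             # ≈ 250.0

print(characteristic, round(mantissa, 4), round(recovered, 4))
```

Running this prints the characteristic 2, the mantissa 0.3979, and the recovered argument 250.0, matching the worked example above.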
Types, Variations, and Methods
While a logarithm can technically have any positive number as its base (excluding 1), the mathematical and scientific communities rely almost exclusively on three specific variations. The first is the Common Logarithm, which uses a base of 10 and is typically written simply as $\log(x)$ without an explicit base. Common logarithms are deeply tied to our base-10 decimal number system. They excel in engineering, chemistry, and earth sciences because they immediately tell you the order of magnitude of a number. If a common logarithm evaluates to 5.4, you instantly know the original number is between 100,000 ($10^5$) and 1,000,000 ($10^6$).
The second, and arguably most important in advanced mathematics, is the Natural Logarithm. The natural logarithm uses the irrational mathematical constant $e$ (approximately 2.71828) as its base. It is universally denoted as $\ln(x)$. The constant $e$ represents the absolute limit of continuous compounding growth. Therefore, natural logarithms are the absolute standard in calculus, physics, and financial mathematics whenever dealing with continuous rates of change—such as a population reproducing continuously or an investment compounding every millisecond. The natural logarithm of a number tells you the time needed to reach a certain level of growth under continuous compounding.
The third major variation is the Binary Logarithm, which uses a base of 2 and is frequently denoted as $\log_2(x)$ or sometimes $\lg(x)$. Binary logarithms are the lifeblood of computer science, information theory, and digital photography. Because computers operate on a binary system of ones and zeros (transistors being either on or off), the binary logarithm perfectly models digital states. It is used to calculate how many bits are required to encode a specific number of possibilities, or to determine the maximum number of steps required to find an item in a sorted database using a binary search algorithm. While these three bases dominate, other bases surface in specific scenarios; even in music theory, where twelve-tone equal temperament spaces adjacent semitones by the frequency ratio $2^{1/12}$, interval calculations reduce to base-2 logarithms scaled by twelve.
How It Works — Step by Step
The fundamental mechanics of a logarithm are defined by a strict, unbreakable relationship with exponentiation. The core formula is: if $b^y = x$, then $\log_b(x) = y$. Here, $b$ is the base, $y$ is the exponent (the logarithm), and $x$ is the argument. Let us walk through a complete, manual calculation to prove this relationship. Suppose you are asked to evaluate $\log_5(625)$. You must ask yourself, "To what power must I raise 5 to get 625?" You begin multiplying the base by itself: $5^1 = 5$. Next, $5^2 = 5 \times 5 = 25$. Next, $5^3 = 25 \times 5 = 125$. Finally, $5^4 = 125 \times 5 = 625$. Because the base must appear as a factor exactly 4 times to reach the argument, the answer is definitively 4. Thus, $\log_5(625) = 4$.
However, what happens when the argument is not a perfect power of the base? Suppose you need to calculate $\log_3(50)$. We know that $3^3 = 27$ and $3^4 = 81$. Since 50 lies between 27 and 81, the logarithm must be a decimal between 3 and 4. To solve this precisely without a dedicated base-3 calculator, we use the Change of Base Formula, which is one of the most powerful tools in logarithm mechanics. The formula states that $\log_b(x) = \frac{\log_k(x)}{\log_k(b)}$, where $k$ is any new base you choose. In practice, you choose base 10 or base $e$ because those are programmed into standard calculators.
Let us execute the Change of Base Formula for $\log_3(50)$ using base 10. Step 1: Set up the fraction: $\frac{\log_{10}(50)}{\log_{10}(3)}$. Step 2: Calculate the numerator. The base-10 log of 50 is approximately 1.69897. Step 3: Calculate the denominator. The base-10 log of 3 is approximately 0.47712. Step 4: Divide the numerator by the denominator: $1.69897 / 0.47712 \approx 3.5609$. Therefore, $\log_3(50) \approx 3.5609$. You can verify this by raising 3 to the power of 3.5609, which will yield approximately 50. This step-by-step process allows you to evaluate any logarithm, of any base, for any positive argument.
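The four steps above translate directly into code. A minimal Python sketch (the helper name log_base is an illustrative assumption, not a standard library function):

```python
import math

def log_base(x: float, base: float) -> float:
    """Change of base: log_b(x) = ln(x) / ln(b)."""
    return math.log(x) / math.log(base)

value = log_base(50, 3)
print(round(value, 4))        # ≈ 3.5609

# Verify by exponentiating: 3 raised to the result returns ~50.
print(round(3 ** value, 4))   # ≈ 50.0
```

Using the natural log (base $e$) in the numerator and denominator gives the same answer as base 10, since the change-of-base formula works with any intermediate base $k$.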
The Laws of Logarithms
To manipulate complex equations involving exponents, you must memorize and apply the three fundamental Laws of Logarithms. These laws are direct translations of the laws of exponents. Because multiplying exponents with the same base requires adding their powers ($x^2 \times x^3 = x^5$), logarithms have a corresponding rule. This is the Product Rule, which states: $\log_b(M \times N) = \log_b(M) + \log_b(N)$. This means the logarithm of a multiplied product is equal to the sum of the individual logarithms. For example, $\log_{10}(100 \times 1000)$ can be split into $\log_{10}(100) + \log_{10}(1000)$. Since $\log_{10}(100) = 2$ and $\log_{10}(1000) = 3$, the sum is 5. Sure enough, $100 \times 1000 = 100,000$, and $\log_{10}(100,000)$ is exactly 5.
The second law is the Quotient Rule, which handles division. It states: $\log_b(M / N) = \log_b(M) - \log_b(N)$. Just as dividing exponents requires subtraction ($x^5 / x^2 = x^3$), the logarithm of a quotient is the difference of the logarithms. If you need to calculate $\log_2(64 / 8)$, you can rewrite it as $\log_2(64) - \log_2(8)$. We know $\log_2(64) = 6$ and $\log_2(8) = 3$. Subtracting 3 from 6 gives 3. Verifying this, $64 / 8 = 8$, and $\log_2(8)$ is indeed 3. This rule is particularly useful for simplifying complex algebraic fractions.
The third and most powerful law is the Power Rule, which states: $\log_b(M^p) = p \times \log_b(M)$. This rule allows you to take an exponent from inside the logarithm and move it to the front as a multiplier. This is the exact mechanism that allows us to solve for unknown variables trapped in exponents. For example, if you have the equation $2^x = 10$, you can take the base-10 logarithm of both sides: $\log_{10}(2^x) = \log_{10}(10)$. Using the power rule, you pull the $x$ down to the front: $x \times \log_{10}(2) = 1$. Now, simple division isolates the variable: $x = 1 / \log_{10}(2)$. Since $\log_{10}(2) \approx 0.301$, $x \approx 3.322$. Without the Power Rule, solving exponential equations analytically would be impossible.
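All three laws can be verified numerically. A short Python sketch, reusing the same numbers as the worked examples above:

```python
import math

M, N, p = 100.0, 1000.0, 3.0

# Product rule: log(M * N) == log(M) + log(N)
assert math.isclose(math.log10(M * N), math.log10(M) + math.log10(N))
# Quotient rule: log(M / N) == log(M) - log(N)
assert math.isclose(math.log10(M / N), math.log10(M) - math.log10(N))
# Power rule: log(M ** p) == p * log(M)
assert math.isclose(math.log10(M ** p), p * math.log10(M))

# Power rule in action: solve 2**x = 10  →  x = log(10) / log(2)
x = math.log10(10) / math.log10(2)
print(round(x, 3))   # ≈ 3.322
```

The assertions pass because the logarithm laws are exact identities; math.isclose merely absorbs the tiny floating-point rounding inherent in any calculator.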
Real-World Examples and Applications
Logarithms are not mere abstract mathematical curiosities; they are the structural framework for how we measure extreme variations in the physical world. Consider the Richter Scale, which measures the magnitude of earthquakes. The Richter scale is a base-10 logarithmic scale. This means that an earthquake measuring 6.0 on the Richter scale does not have 20% more shaking amplitude than a 5.0 earthquake; it has exactly 10 times the shaking amplitude. Furthermore, because of the specific energy formula used, a 6.0 earthquake releases approximately 31.6 times more actual destructive energy than a 5.0. When a devastating 9.0 magnitude earthquake strikes, it is releasing one million times more energy than a minor 5.0 tremor. Without logarithms, the news would have to report earthquake energies in billions of joules, which is entirely incomprehensible to the general public.
Another ubiquitous application is the Decibel (dB) scale used to measure sound intensity. Human hearing is incredibly dynamic; the loudest sound we can hear without immediate damage is trillions of times more powerful than the quietest sound we can detect. To make this range understandable, acousticians use a base-10 logarithmic scale multiplied by 10. The formula for sound intensity in decibels is $L = 10 \times \log_{10}(I / I_0)$, where $I_0$ is the threshold of human hearing. Because of this logarithmic relationship, a normal conversation at 60 dB carries 1,000 times more sound energy than a whisper at 30 dB. If you attend a rock concert at 110 dB, the sound waves are hitting your eardrums with 100,000 times more acoustic power than the 60 dB conversation.
In chemistry, the pH scale is used to determine how acidic or basic a substance is. The pH formula is literally a logarithm: $\text{pH} = -\log_{10}([H^+])$, where $[H^+]$ is the concentration of hydrogen ions in moles per liter. Pure water has a hydrogen ion concentration of $10^{-7}$, so its pH is exactly 7. Battery acid has a concentration of roughly $10^{0} = 1$ mole per liter, giving it a pH of approximately 0. Because it is a logarithmic scale, a substance with a pH of 4 is 10 times more acidic than a substance with a pH of 5, and 100 times more acidic than a pH of 6. Finally, in finance, natural logarithms are used to calculate compound interest over time. If you invest $10,000 at a 5% continuous interest rate and want to know exactly how long it will take to reach $25,000, you use the formula $t = \ln(A/P) / r$. Plugging in the numbers: $\ln(25000/10000) / 0.05 = \ln(2.5) / 0.05 \approx 0.9163 / 0.05 \approx 18.33$ years. Logarithms turn complex exponential growth into simple, actionable timelines.
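The doubling-time and pH calculations above can be reproduced in a few lines of Python (the function name time_to_grow is illustrative):

```python
import math

def time_to_grow(principal: float, target: float, rate: float) -> float:
    """Years to grow from principal to target at continuous rate r: t = ln(A/P) / r."""
    return math.log(target / principal) / rate

print(round(time_to_grow(10_000, 25_000, 0.05), 2))   # ≈ 18.33 years

# pH from hydrogen-ion concentration: pH = -log10([H+])
print(-math.log10(1e-7))                              # ≈ 7.0 (pure water)
```
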
Common Mistakes and Misconceptions
When novices begin working with logarithms, they almost universally fall victim to a specific set of algebraic blunders. The most pervasive mistake is the false "Logarithm of a Sum" assumption. Students frequently assume that $\log_b(x + y)$ is equal to $\log_b(x) + \log_b(y)$. This is categorically false. Logarithms distribute over multiplication (turning it into addition), but they absolutely do not distribute over addition. There is no simplified algebraic expansion for $\log_b(x + y)$. For example, $\log_{10}(10 + 100)$ is $\log_{10}(110)$, which is roughly 2.04. But $\log_{10}(10) + \log_{10}(100)$ is $1 + 2 = 3$. The numbers do not match, proving the fallacy.
Another profound misconception involves the domain of logarithmic functions. Beginners frequently attempt to calculate the logarithm of a negative number or zero, such as $\log_{10}(-100)$. In the realm of real numbers, this is strictly undefined. A positive base raised to any real exponent will always produce a positive result. You cannot multiply 10 by itself any number of times (positive, negative, or fractional) and arrive at -100 or 0. While the logarithm of a negative number can be calculated using complex numbers and imaginary numbers (resulting in a complex logarithm), it is an error in standard algebra, calculus, and real-world modeling. Attempting to input $\log(0)$ into a calculator will result in a domain error, as the true mathematical limit of $\log(x)$ as $x$ approaches 0 is negative infinity.
Finally, a major source of confusion stems from notation, specifically the unwritten base. In mathematics, depending on the context and the country, simply writing $\log(x)$ can mean completely different things. In high school algebra in the United States, $\log(x)$ almost always implies a base of 10. However, in advanced mathematics, university-level calculus, and programming languages like C++, Python, and JavaScript, the log(x) function explicitly calculates the natural logarithm (base $e$). If a developer assumes Math.log(100) in JavaScript will return 2 (base 10), they will be shocked when it returns 4.605 (base $e$). Failing to verify the assumed base of the notation or the programming environment causes catastrophic calculation errors in engineering and software development.
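A quick demonstration of this base ambiguity, using Python's math module (whose bare log, like JavaScript's Math.log, is the natural logarithm):

```python
import math

print(round(math.log(100), 3))   # 4.605 -- bare log is the NATURAL log (base e)
print(math.log10(100))           # 2.0   -- the common log must be requested explicitly
print(math.log2(8))              # 3.0   -- likewise the binary log
```

Always check the documentation of your language's math library before assuming a base; the three lines above would silently produce very different numbers if the assumed base were wrong.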
Best Practices and Expert Strategies
Professionals who utilize logarithms daily rely on a set of standardized strategies to ensure accuracy and efficiency. The primary best practice is to always convert exponential equations to a common base before attempting to solve them. If an engineer is faced with an equation like $3^{x+1} = 7^{2x}$, the expert strategy is to immediately take the natural logarithm ($\ln$) of both sides. While any base works mathematically, taking the natural log is the universal standard because it aligns perfectly with calculus operations (derivatives and integrals) that may be required later in the problem. By taking the natural log, the equation instantly becomes $(x+1)\ln(3) = (2x)\ln(7)$, transforming a difficult exponential problem into a basic linear algebra problem that can be solved by isolating $x$.
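Carrying that algebra through to a numeric answer might look like this in Python (a sketch, assuming the same equation $3^{x+1} = 7^{2x}$ from above):

```python
import math

# Taking ln of both sides of 3**(x + 1) == 7**(2*x) gives
# (x + 1) * ln(3) = 2x * ln(7), so x = ln(3) / (2*ln(7) - ln(3)).
x = math.log(3) / (2 * math.log(7) - math.log(3))
print(round(x, 4))   # ≈ 0.3933

# Both sides of the original equation should now agree.
assert math.isclose(3 ** (x + 1), 7 ** (2 * x))
```
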
Another expert strategy is the mental estimation of logarithms for quick sanity checks. A seasoned professional knows the approximate base-10 logarithms of the numbers 2 through 9 by heart. For instance, knowing that $\log_{10}(2) \approx 0.301$ and $\log_{10}(3) \approx 0.477$ allows you to estimate almost any other value using the laws of logarithms. If you need to estimate $\log_{10}(6)$, you simply add the logs of 2 and 3 (since $2 \times 3 = 6$), yielding $0.301 + 0.477 = 0.778$. If a calculator outputs a result of 1.5 for $\log_{10}(6)$, the expert instantly knows a keystroke error occurred because the mental model proves the answer must be around 0.778. Developing this logarithmic intuition prevents blind reliance on digital tools.
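That estimation technique can be checked programmatically. A small Python sketch confirming that the memorized anchors, combined via the laws of logarithms, stay within about 0.005 of the true values:

```python
import math

LOG2, LOG3 = 0.301, 0.477   # memorized base-10 anchors

estimates = {
    4: 2 * LOG2,        # 4 = 2^2     → power rule
    5: 1 - LOG2,        # 5 = 10 / 2  → quotient rule
    6: LOG2 + LOG3,     # 6 = 2 * 3   → product rule
    8: 3 * LOG2,        # 8 = 2^3
    9: 2 * LOG3,        # 9 = 3^2
}
for n, estimate in estimates.items():
    assert abs(estimate - math.log10(n)) < 0.005
```
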
Furthermore, data scientists and statisticians heavily rely on "log transformations" when dealing with highly skewed datasets. If a dataset contains values ranging from $10 to $10,000,000 (such as human wealth distribution), a standard linear regression model will fail because the massive outliers will distort the trendline. The expert best practice is to apply a logarithmic transformation to the entire dataset—replacing every value $x$ with $\log(x)$. This compresses the massive tail of the distribution, pulling extreme outliers closer to the median and transforming an exponential curve into a straight line. This practice is mandatory in econometrics, biology, and machine learning to normalize data and satisfy the assumptions of linear modeling algorithms.
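A minimal illustration of a log transformation on a skewed list of values (the sample numbers are invented for demonstration):

```python
import math

# Skewed sample values spanning six orders of magnitude (invented data)
values = [10, 250, 4_000, 75_000, 1_200_000, 10_000_000]

# Replacing each value x with log10(x) compresses the range dramatically.
transformed = [math.log10(v) for v in values]
print([round(t, 2) for t in transformed])   # spans ~1.0 to 7.0 instead of 10 to 10,000,000
```

After the transformation, each unit step on the new scale represents a tenfold multiplication on the original scale, which is exactly what linear modeling algorithms need to treat multiplicative growth as a straight line.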
Edge Cases, Limitations, and Pitfalls
While logarithms are incredibly powerful, they possess strict mathematical boundaries and edge cases that will break your calculations if ignored. The most rigid limitation is the definition of the base itself. The base of a logarithm must be a positive real number, and it cannot equal 1. Why is base 1 forbidden? Consider the equation $\log_1(5) = y$. This translates to $1^y = 5$. Because the number 1 raised to any power will always remain 1, there is no possible exponent that can turn a 1 into a 5. The equation is mathematically nonsensical. Similarly, negative bases are highly problematic. If you attempted to use a base of -2, the function would jump wildly between positive and negative values, and for fractional exponents (like $1/2$, which represents a square root), it results in imaginary numbers. Therefore, the function $\log_b(x)$ is strictly limited to $b > 0$ and $b \neq 1$.
In the realm of computer science and numerical analysis, floating-point precision is a massive pitfall. Computers do not possess infinite memory, so they approximate irrational numbers (like the results of most logarithms) using a fixed number of binary digits. When calculating the logarithm of a number extremely close to 1, such as $\ln(1.0000000001)$, the naive approach of adding first and then calling the standard log function suffers from catastrophic cancellation, losing precision and returning inaccurate results. To circumvent this edge case, professional software libraries include a specific function called log1p(x), which computes $\ln(1 + x)$ directly and remains accurate even when $x$ is microscopically small. Programmers who are unaware of this limitation will introduce silent, cascading errors into financial or scientific simulations.
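A small Python demonstration of the cancellation problem and the log1p fix:

```python
import math

x = 1e-10

naive = math.log(1 + x)     # 1 + x rounds away digits before log ever runs
stable = math.log1p(x)      # computes ln(1 + x) without forming 1 + x

print(naive)    # ≈ 1.00000008e-10  (garbage beyond the first digit)
print(stable)   # ≈ 9.9999999995e-11 (correct: x - x^2/2 + ...)
```

The naive result is wrong in the eighth significant digit, because adding 1e-10 to 1.0 discards most of x's precision before the logarithm is even computed.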
Another limitation is that logarithmic scales completely obscure absolute differences in favor of relative differences. If you are looking at a logarithmic graph of a company's revenue, a jump from $10,000 to $100,000 looks identical on the y-axis to a jump from $1,000,000 to $10,000,000. Both represent a 10x increase, so they take up the same vertical space on the chart. While this is mathematically correct, it is a massive psychological pitfall for investors and executives who may misinterpret the graph, failing to realize that the absolute dollar increase in the second scenario ($9,000,000) is one hundred times larger than the first scenario ($90,000). Using logarithmic visual representations without explicit labeling and context is a common way statistics are manipulated to mislead novices.
Industry Standards and Benchmarks
The mathematical and scientific communities have established rigorous international standards for logarithmic notation to prevent catastrophic miscommunications between disciplines. The most authoritative standard is ISO 80000-2, published by the International Organization for Standardization, which explicitly defines the mathematical signs and symbols to be used in science and technology. According to ISO 80000-2, the natural logarithm (base $e$) must be denoted as $\ln(x)$. The common logarithm (base 10) must be denoted as $\lg(x)$. The binary logarithm (base 2) must be denoted as $\text{lb}(x)$. The notation $\log_b(x)$ is reserved for when the base $b$ is explicitly written out as a subscript.
Despite this clear international standard, industry-specific conventions often override it based on historical precedent. In higher mathematics and theoretical physics, the notation $\log(x)$ without a subscript usually implies the natural logarithm ($\ln$), because base 10 is considered an arbitrary artifact of human biology (having ten fingers), whereas base $e$ is a fundamental constant of the universe. Conversely, in engineering disciplines like electrical engineering and acoustics, $\log(x)$ universally implies base 10, because their benchmark scales (like decibels and Bode plots) are entirely constructed around powers of ten. In computer science, specifically in the analysis of algorithms (Big O notation), $\log(n)$ conventionally implies base 2, because algorithmic complexity is benchmarked against binary tree structures and halving operations (although inside Big O notation itself the base is immaterial, since changing it only multiplies the result by a constant factor). A professional must always check their notation against the specific industry journal or programming language documentation they are working with.
Comparisons with Alternatives
When evaluating mathematical tools, it is crucial to understand why you would choose a logarithm over an alternative approach, such as taking a root or using simple linear division. Both roots and logarithms are inverses of exponentiation, but they solve for entirely different variables. Consider the equation $b^y = x$. If you know the exponent ($y$) and the result ($x$), but you need to find the base ($b$), you use a Root. For example, if $b^3 = 125$, you take the cube root of 125 to find that the base is 5. However, if you know the base ($b$) and the result ($x$), but you need to find the exponent ($y$), you must use a Logarithm. If $5^y = 125$, you take $\log_5(125)$ to find that the exponent is 3. Roots uncover the base; logarithms uncover the exponent. You cannot substitute one for the other.
Another alternative to logarithmic scaling is Linear Scaling. When tracking growth, a linear scale plots absolute values: 10, 20, 30, 40. A linear scale is superior when you are measuring additive growth, such as saving exactly $100 per month. The graph will show a perfect, easy-to-read straight line. However, if you apply a linear scale to exponential growth, such as a virus spreading where each infected person infects two more daily (1, 2, 4, 8, 16, 32...), the linear graph becomes useless. In the early days, the line looks completely flat, hiding the danger. In the later days, the line shoots straight up, going off the chart and making it impossible to read specific values. The logarithm is chosen over the linear alternative because it transforms exponential multiplication into visual addition. On a logarithmic chart, the virus spread (1, 10, 100, 1000) becomes equally spaced increments, resulting in a straight line that clearly reveals the true rate of infection. You choose logarithms over linear alternatives the moment your data is driven by multiplication rather than addition.
Frequently Asked Questions
Can a logarithm be a negative number? Yes, the result of a logarithm (the exponent) can absolutely be negative, even though the argument must be positive. A negative logarithm indicates that the base must be raised to a negative power, which mathematically represents a fraction or division. For example, $\log_{10}(0.01) = -2$. This is because $10^{-2}$ is mathematically equivalent to $1 / 10^2$, which equals $1 / 100$, or 0.01. Whenever the argument is between 0 and 1, the logarithm will always be negative.
What is the logarithm of 1? The logarithm of 1 is always exactly 0, regardless of what the base is (as long as the base is valid). Whether you are calculating $\log_{10}(1)$, $\ln(1)$, or $\log_{500}(1)$, the answer is 0. This is because of the fundamental rule of exponents: any non-zero number raised to the power of 0 equals 1. Therefore, the exponent required to turn any base into 1 is always 0.
What is the mathematical constant $e$, and why is it used? The constant $e$, also known as Euler's number, is an irrational number approximately equal to 2.71828. It is defined as the absolute mathematical limit of continuous compounding. If you have $1 earning 100% interest over one year, compounded continuously, you will not end up with infinite money; you will end up with approximately $2.72, because the limit is exactly $e$ dollars. Because it represents the natural rate of continuous growth, it is the base of the natural logarithm ($\ln$), making it the optimal base for calculating time, decay, and continuous rates in physics and finance.
How do I calculate a logarithm if my calculator only has "log" and "ln" buttons? If you need to calculate a logarithm with a custom base, such as $\log_2(32)$, you must use the Change of Base formula. You can use either the common log (log) or the natural log (ln) button on your calculator. Simply calculate the log of the argument and divide it by the log of the base. For $\log_2(32)$, you would type $\ln(32) / \ln(2)$. This evaluates to $3.4657 / 0.6931$, which equals 5 (and indeed $2^5 = 32$). This method works for any valid base and any positive argument.
What is an antilogarithm, and how do I use it? An antilogarithm is simply the reverse operation of a logarithm; it is the process of raising the base to the power of the logarithm to find the original number. If you know that $\log_{10}(x) = 4$, the antilogarithm is calculated by taking the base (10) and raising it to the power of the result (4). Therefore, $x = 10^4 = 10,000$. On most scientific calculators, the antilogarithm function is accessed by pressing the "Shift" or "2nd" key followed by the "log" key, which usually displays as $10^x$.
Why is $\log(x+y)$ not equal to $\log(x) + \log(y)$? Logarithms are exponents, and they follow the rules of exponents. When you multiply numbers with the same base, you add their exponents ($x^2 \times x^3 = x^5$). Therefore, logarithms turn multiplication into addition: $\log(x \times y) = \log(x) + \log(y)$. However, there is no rule in algebra that simplifies the addition of two exponential terms ($x^2 + x^3$ cannot be simplified into a single exponent). Because exponentiation does not distribute over addition, logarithms do not distribute over addition either.
How are logarithms used in computer science? In computer science, base-2 logarithms ($\log_2$) are used to measure the time complexity and efficiency of algorithms, often expressed in Big O notation. For example, a Binary Search algorithm finds an item in a sorted list by repeatedly cutting the list in half. If a database has 1,000,000 entries, a linear search might take up to 1,000,000 steps. A binary search takes at most about $\log_2(1,000,000)$ steps. Since $2^{20}$ is roughly 1,000,000, the logarithm tells us it will take a maximum of about 20 steps to find any item. Logarithms quantify exactly why binary search is dramatically more efficient than linear search.
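The halving argument can be sketched directly; ceil(log2(n)) counts how many times n items can be cut in half before a single item remains (the function name is illustrative):

```python
import math

def halvings_to_one(n: int) -> int:
    """Times a list of n items can be cut in half before one item remains."""
    return math.ceil(math.log2(n))

print(halvings_to_one(1_000_000))   # → 20
print(halvings_to_one(2 ** 30))     # → 30
```
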