Want to know why you can't divide by 0?

You can divide by zero.

How many times can I give away a car to nobody? Infinity to the infinite power.

It's not that you can't; the number produced would always be an infinite googolplex infinity + 1, and a calculator would explode, so they made it error out.

That's just my opinion though, I could be wrong.
It's not infinity. As the denominator approaches 0, the quotient tends towards infinity, but at 0 itself it is not infinity.

lim(x → 0) 1/x = ∞.
 
I'm bowing out of this debate; I'll leave it to you boneheads to discuss. :whistling:
 
It's really more complicated than "it's defined" or "it's not defined". It depends on the mathematical context you're talking about.

In most areas of maths (elementary maths, algebra, simple calculus) it's generally classed as not defined. There are many ways of proving this, or at least of showing that the results yield nonsense (it's trivial to show that 1 = 2, for instance, if you allow dividing by 0: start from 0 × 1 = 0 × 2 and "cancel" the zeros). The problem with the limit argument to infinity is that the limit from the left approaches negative infinity just as strongly as the limit from the right approaches positive infinity, giving conflicting one-sided limits and rendering the argument contradictory.
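As a quick numerical sketch of those conflicting one-sided limits (plain Python here, with the sample values chosen purely for illustration):

```python
# Evaluate 1/x as x approaches 0 from the right (+x) and from the left (-x).
# The right-hand values blow up towards +infinity and the left-hand values towards
# -infinity, so the two one-sided limits disagree and lim(x -> 0) 1/x does not exist.
for x in [0.1, 0.001, 0.00001]:
    print(f"1/{x} = {1/x:g}    1/{-x} = {1/(-x):g}")
# 1/0.1 = 10    1/-0.1 = -10
# 1/0.001 = 1000    1/-0.001 = -1000
# 1/1e-05 = 100000    1/-1e-05 = -100000
```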

The majority of cases where some form of division by 0 is defined therefore involve the resolution of positive and negative infinity - the definition of an "unsigned infinity." There are a few specialised cases where this is assumed, usually in the context of extended number lines (in fact they're the only cases I can think of, though my maths is rather rusty in this area). The projectively extended real line and the extended non-negative reals (and the Riemann sphere, which is the same idea extended to the complex numbers) define an unsigned point at infinity, where x/0 (for non-zero x) can then be somewhat sensibly defined.
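For what it's worth, the usual convention on the projectively extended real line (and likewise on the Riemann sphere) goes roughly as follows - a sketch of the standard rules rather than a full treatment:

```latex
% Convention on the projectively extended real line
% \widehat{\mathbb{R}} = \mathbb{R} \cup \{\infty\}, which has one unsigned point at infinity:
\[
  \frac{x}{0} = \infty \;\; (x \neq 0), \qquad
  \frac{x}{\infty} = 0 \;\; (x \neq \infty), \qquad
  x + \infty = \infty \;\; (x \neq \infty).
\]
% Expressions such as 0/0, \infty/\infty, 0 \cdot \infty and \infty + \infty remain undefined.
```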

The programmers amongst us may know that in IEEE 754, +0 and -0 are two different things, which is a separate approach that allows us to define division by 0: instead of removing the sign from infinity, we're attaching it to 0. In pure maths this is bogus, but it lets us deal with practical computation situations that arise (such as overflow/underflow). In practice, however, many languages still raise an error or return NaN (not a number) rather than a positive or negative infinity. In other words, it's implementation dependent.
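To make that implementation dependence concrete, here is a minimal sketch in Python (NumPy is used for the IEEE 754 behaviour purely as an example; other languages and libraries behave differently):

```python
import math
import numpy as np

# IEEE 754 keeps +0.0 and -0.0 as distinct bit patterns, even though they compare equal.
print(0.0 == -0.0)               # True
print(math.copysign(1.0, -0.0))  # -1.0 (the sign is still carried by the zero)

# NumPy follows the IEEE 754 convention: nonzero / signed zero gives a signed infinity,
# while 0/0 gives NaN.  errstate just silences the usual runtime warnings.
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.divide(1.0, 0.0))   # inf
    print(np.divide(1.0, -0.0))  # -inf
    print(np.divide(0.0, 0.0))   # nan

# Plain Python, by contrast, refuses outright.
try:
    1.0 / 0.0
except ZeroDivisionError as exc:
    print("Python float division:", exc)  # float division by zero
```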

There may be other areas I don't know about, but the fact of the matter is that it really isn't as simple as a blanket yes/no answer. It depends on the context, and unless you have a very strong mathematical background it is very difficult to answer conclusively (though in the absence of a specific context, and in terms of simple maths, the answer is almost definitely no).
 
It depends on what sense you are approaching it in as well. You are approaching it in a theoretical sense, looking for the reason why it's undefined. In an engineering sense, it's simply undefined. I don't derive the equations, I just use them. I leave it to the PhDs such as yourself to derive them, haha.
 
Wow, that is some reply. Thanks for posting. Rep+ (if that were still possible on this forum).
But this thread is getting out of hand; I just wanted to post a video.
Fair point, I just wanted to throw a reply that was a bit more complete in there ;)
 
Fair point, I just wanted to throw a reply that was a bit more complete in there ;)

Yeah, thank you for that. Rep+1 (if that were still possible on this forum).
But this thread is getting out of hand; I just wanted to post a video I liked.
 
Wow, that was pretty in-depth, haha. I congratulate you on going that far and actually explaining your reasoning rather than just shouting at us. I think I can understand why some people are stuck on division by zero and others aren't.

I guess saying it's situational is fair enough?
 
lim(x → 0) 1/x = ∞.

You're putting a filter on the result.

I'm speaking in simple terms of 1/x=y.



In most areas of maths (elementary maths, algebra, simple calculus) it's generally classed as not defined.

Perhaps there is a better definition in the representation.

The infinity symbol is also sometimes depicted as a special variation of the ancient ouroboros snake symbol. The snake is twisted into the horizontal eight configuration while engaged in eating its own tail, a uniquely suitable symbol for endlessness.

[...]

As in real analysis, in complex analysis the symbol ∞, called "infinity", denotes an unsigned infinite limit.

source



The majority of cases where some form of division by 0 is defined therefore involve the resolution of positive and negative infinity - the definition of an "unsigned infinity." There are a few specialised cases where this is assumed, usually in the context of extended number lines (in fact they're the only cases I can think of, though my maths is rather rusty in this area). The projectively extended real line and the extended non-negative reals (and the Riemann sphere, which is the same idea extended to the complex numbers) define an unsigned point at infinity, where x/0 (for non-zero x) can then be somewhat sensibly defined.

Taking the number line example more literally:

Just because you cannot draw a dot on the number line doesn't mean the answer shouldn't be contemplated or understood.

There may be other areas I don't know about, but the fact of the matter is that it really isn't as simple as a blanket yes/no answer. It depends on the context, and unless you have a very strong mathematical background it is very difficult to answer conclusively (though in the absence of a specific context, and in terms of simple maths, the answer is almost definitely no).

If you didn't want to get caught in a programming loop, that would be good context ;)
 
I'll take on the 1/x = y thread of discussion. Consider it a bit of weekend algebraic acrobatics. :)

This is what we know:
1/x = y/1
and cross-multiplying gives
1/1 = xy

So, if x = 0.1:
1/0.1 = 10
1/0.1 = 10/1
1/1 = 0.1 × 10

If x = 0.01:
1/0.01 = 100
1/0.01 = 100/1
1/1 = 0.01 × 100

If x = 0.000001:
1/0.000001 = 1000000
1/0.000001 = 1000000/1
1/1 = 0.000001 × 1000000

And so on.

Now, it would seem that as x becomes a really, really small number, y becomes a very, very large number. It is only natural to plug 0 and ∞ into x and y, respectively, as 0 carries a certain attribute of smallness and ∞ carries a certain attribute of largeness.

But this is not right.

If indeed:
1/0 = ∞

Then:
1/0 = ∞/1

We should have:
1/1 = 0 × ∞

Because anything multiplied by 0 is 0, this means:
1 = 0 (a. k. a. WTF?)

Since 1 is not 0: if x is 0, then y cannot be ∞.
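As a quick sanity check of that argument in floating point (plain Python; the particular values are just illustrative):

```python
import math

# For ordinary non-zero x, 1/x = y really does give x * y = 1
# (floating point rounds back to exactly 1.0 for these particular values).
for x in [0.1, 0.01, 0.000001]:
    y = 1 / x
    print(f"x = {x}: x * y = {x * y}")  # prints x * y = 1.0 for each value above

# But substituting the "limit" values x = 0 and y = infinity breaks the pattern:
# IEEE 754 defines 0 * infinity as NaN precisely because no consistent value exists.
print(0.0 * math.inf)  # nan
```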
 