
Wikipedia:Reference desk/Archives/Mathematics/2012 January 6

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 6

How did mathematicians figure out trigonometric ratios?

Like how to use sin, cos, tan, etc. I always thought it was amazing that they can find a side that way or find an angle.

Thanks a lot. — Preceding unsigned comment added by 139.62.223.182 (talk) 00:23, 6 January 2012 (UTC)[reply]

Basically by thinking. Sine, cosine, and so on are fairly simple - they are defined as the ratio between two sides of a right triangle. The core insight is that this ratio is the same for any right triangle with the same angles, no matter what the absolute size of the triangle is. The field of maths is called trigonometry, and the first systematic exposition (still extant) was Euclid's Elements, written around 300 BCE. --Stephan Schulz (talk) 00:36, 6 January 2012 (UTC)[reply]
Exactly. I would say that the only difficulty would have come from computing the values of the functions. I'm not sure what method they would have used to create the tables that they used, but you can calculate them to any degree of accuracy by evaluating sufficiently many terms in their Taylor series. Fly by Night (talk) 00:53, 6 January 2012 (UTC)[reply]
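For illustration, a minimal Python sketch of the truncated-Taylor-series approach mentioned above; the function name and the crude argument reduction are just for this example, not a production implementation:

```python
import math

def sin_taylor(x, terms=10):
    """Approximate sin(x) by summing the first `terms` terms of its Taylor series."""
    x = math.fmod(x, 2 * math.pi)  # crude argument reduction so the series converges quickly
    total, term = 0.0, x           # the first term of the series is x itself
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))  # next odd-power term
    return total

print(sin_taylor(1.0), math.sin(1.0))  # the two values agree to many digits
```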
See CORDIC for a modern method of calculating the values of trig functions. AndrewWTaylor (talk) 01:04, 6 January 2012 (UTC)[reply]
Presumably the first trig tables were produced just by measuring accurately and dividing lengths in large right-angled triangles, as is sometimes done in schools to introduce trigonometry. Dbfirs 10:47, 6 January 2012 (UTC)[reply]
Thanks for the link Andrew. I hadn't heard of the CORDIC method before. I was quite surprised reading the CORDIC article. It seems, or at least the article makes it seem, that the CORDIC evaluation is far more complicated than the evaluation of a polynomial (i.e. a truncated power series). But I suppose that that reflects the state of affairs: computers are very good at doing the same thing a trillion times in a row. Fly by Night (talk) 21:37, 6 January 2012 (UTC)[reply]
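For comparison, here is a minimal Python sketch of the CORDIC idea (rotation mode, valid for angles in roughly [-π/2, π/2]); a real hardware implementation would use fixed-point shifts and adds rather than floats, so this is only an illustration:

```python
import math

def cordic_sin_cos(theta, n_iters=40):
    """Approximate (sin(theta), cos(theta)) by rotating the vector (1, 0) towards theta
    using only additions and multiplications by powers of two (plus one final scaling)."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]  # table of elementary rotation angles
    gain = 1.0
    for i in range(n_iters):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))             # each pseudo-rotation stretches the vector
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0                          # rotate towards the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y / gain, x / gain                                # (sin, cos)

print(cordic_sin_cos(0.5))  # compare with (math.sin(0.5), math.cos(0.5))
```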
Computing a table of trig functions suggests different methods than computing just one or a few values. As an example, let me consider antilogarithms first, because they're simpler to imagine.
If you have a single number x and want to compute exp(x) to ten digits, then indeed a method involving power series is the best. But suppose that instead you want to compile a full table of antilogarithms to four or five digits precision. For this, you could start from exp(c) where c is a small number, then compute exp(nc) for each natural number n by repeated multiplication of precise results. (The obvious method would be to compute exp((n+1)c) = exp(c)exp(nc) all the time, but this is wrong: it loses precision very quickly. Instead you keep doing something like exp((n+2^k)c) = exp(2^k c)exp(nc), where k is the greatest natural number such that 2^k < n, because then each number in your table will be the result of O(log n) multiplications, thus you get precise results.)
If you want a table of base 10 antilogarithms, you don't in fact need to be able to compute even a single starting value 10^c in the first place. Instead you just choose a small positive number C, assume 10^c = 1 + C and compute the value of 10^(nc) by repeated multiplication as above, all without knowing the actual value of c. Finally, when you get near the end of the table, where 10^(nc) ≈ 10, you solve 10^(xc) = 10 with interpolation from your table, thus you know c = 1/x.
Computing sines and cosines works similarly to the above. If you need just one value, then a solution based on power series is probably the best. If you need a full table, you repeatedly multiply by a unit magnitude complex number (represented by its real and imaginary parts, and be sure to normalize each value to unit magnitude). Here too, you can start from any unit magnitude complex number, and you will compute its angle when you get to the end of the table near exp(iπ/4).
b_jonas 16:54, 9 January 2012 (UTC)[reply]
Update: clarified on what k is.
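A simplified Python sketch of the table-building idea described in the comment above, using direct repeated multiplication of a unit-magnitude complex number with renormalization at each step (the more careful O(log n) pairing of factors is omitted); the function name and step count are illustrative choices:

```python
import cmath
import math

def trig_table(n_steps=90):
    """Tabulate (cos(k*delta), sin(k*delta)) for k = 0..n_steps over a quarter turn."""
    delta = (math.pi / 2) / n_steps
    step = cmath.exp(1j * delta)   # one small rotation; in the scheme above its exact angle
                                   # need not even be known in advance
    z = 1 + 0j
    table = []
    for k in range(n_steps + 1):
        table.append((z.real, z.imag))
        z *= step
        z /= abs(z)                # renormalize to unit magnitude to keep errors from growing
    return table

tbl = trig_table()
print(tbl[30])                     # approximately (cos 30°, sin 30°) = (0.866..., 0.5)
```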

solving all quintics by algebraic means

I read an article dated 5th October 2011 on this Wikipedia about solving all quintics by algebraic means. I am the discoverer and am introducing new mathematical methods which will help scientists and mathematicians in the next generations. Right now, I am about to publish the paper and am inviting mathematicians to challenge the research paper. Also to AndrewWTaylor, you said something about a NOBEL PRIZE awaiting such a discovery. I am really interested. — Preceding unsigned comment added by NII AFRAH (talkcontribs) 12:20, 6 January 2012 (UTC)[reply]

I charge a standard fee of $1000 (US) to provide a detailed analysis of such "discoveries". Sławomir Biały (talk) 12:30, 6 January 2012 (UTC)[reply]
Sławomir Biały makes a very reasonable offer, but here is a short check-list that you might want to run through yourself:
  1. Does your method truly apply to all quintics, with no exclusions ? Will it work for  ?
  2. Does it only involve addition, subtraction, multiplication, division, taking roots and no other operations or functions (so no trigonometric functions or Bring radicals, for example) ?
  3. Does it always terminate in a finite number of steps - so no unbounded iteration loops, for example ?
  4. Does it result in an exact solution or set of solutions, rather than an approximate solution with an error term that can be reduced by additional calculations ?
  5. Can you provide full and explicit details for each step - for example, anywhere that you have said "obviously", "it is clear that" or "anyone can see that", can you fill in the missing details ?
  6. Can you explain how your method is consistent with the Abel–Ruffini theorem ? Or, alternatively, why the Abel–Ruffini theorem is wrong ?
If you are absolutely sure that you can answer "yes" to all these questions then you may have discovered something genuinely new and notable, and your next step should be to write a paper and submit it to a reputable journal. Gandalf61 (talk) 13:28, 6 January 2012 (UTC)[reply]
Very nice answer. I'll add that this proposed general solution for all quintics would not only contradict Abel-Ruffini, but also would indicate that there are severe problems with the correctness of all of Galois theory, which allows for very concise modern proofs of A-R (given in that article). A key concept here is that, unlike science, mathematics generally does not move forward by undermining previous claims. Once a proof is published and accepted (say for ~50 years), it is (almost) never falsified by later work. SemanticMantis (talk) 16:10, 6 January 2012 (UTC)[reply]
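A small illustration of the point about Abel–Ruffini, assuming SymPy is available: x^5 - 4x + 2 is a textbook example of a quintic with Galois group S5 (irreducible by Eisenstein at 2, with exactly three real roots), so its roots have no expression in radicals and SymPy can only return implicit CRootOf objects. This is just a computational sketch of the theorem, not a proof.

```python
from sympy import symbols, solve

x = symbols('x')
p = x**5 - 4*x + 2   # irreducible over Q, Galois group S5
print(solve(p, x))   # [CRootOf(x**5 - 4*x + 2, 0), ..., CRootOf(x**5 - 4*x + 2, 4)]
                     # no closed form in radicals, consistent with Abel-Ruffini
```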
Just show us your solution to . Bo Jacoby (talk) 16:32, 6 January 2012 (UTC).[reply]
I'd say the closest example to claims being undermined is the creation of Spherical and Hyperbolic Geometries, but that was disproving those who believed that no consistent structure could be built with the 5th postulate changed. And the only reason that it took so long for Fermat to be doubted over Fermat's Last Theorem was that he was arguably one of the most famous mathematicians in Europe, though there were certainly those who challenged him for the proof within his lifetime. As for a Nobel Prize, Wiles didn't get one and FLT is much more famous. Nash's work that did get him the Nobel in Economics was actually useful to Economists directly; I'm not sure this one (even if correct) is. Naraht (talk) 16:58, 6 January 2012 (UTC)[reply]
As the OP mentioned my name, I should perhaps point out that my main contribution to the previous discussion was to point out that such a discovery would be more likely to lead to a Fields Medal than a Nobel prize. AndrewWTaylor (talk) 17:13, 6 January 2012 (UTC)[reply]

MY ANSWERS ARE ALL YES. My discovery is going to change the view of ALGEBRA if it enters into the mathematics domain. Galois, Abel and others missed two key equations during their research, and I also say that Galois and Abel were gifted with this discovery but they died too early. Abel and Ruffini came out with the "impossibility Theorem" because they also missed the key equations, before Galois concluded that "a polynomial could be solved by means of a general algebraic formula only if the polynomial has a degree less than five". I see Galois's conclusion and the "impossibility Theorem" to be a false statement algebraically. After their death, mathematicians were researching into that but they focused more on the existing mathematical tools. Now I have the solution and am ready for a challenge. — Preceding unsigned comment added by NII AFRAH (talkcontribs) 17:17, 6 January 2012 (UTC)[reply]

You could start by uploading your paper to the arxiv, so that we all (and others) could read it. This is often done by researchers just before (or at the same time) they submit the manuscript to a publisher. Staecker (talk) 17:48, 6 January 2012 (UTC)[reply]
Or as said above give a solution for . I'd find that extremely interesting and pretty convincing. Dmcq (talk) 18:48, 6 January 2012 (UTC)[reply]
  • It has been really nice to read all of the replies given (especially Gandalf's). Unfortunately it seems, to me at least, that this is a classic case of trolling. The OP was given, more than once, specific problems to apply his new theory to, and yet failed to even acknowledge the questions. I know that it's very tempting to think "what an idiot" and then to throw lots of detailed mathematical theory at the OP in the hope of making him realise the error of his ways. This would work if the OP were an arrogant, yet well-meaning, novice; but the OP does not appear to be well meaning (although s/he is clearly a novice). I would ask you all to deny recognition and to not make any further posts. Fly by Night (talk) 21:50, 6 January 2012 (UTC)[reply]

I would like to upload it to arxiv.org, so look out for it. — Preceding unsigned comment added by NII AFRAH (talkcontribs) 23:35, 6 January 2012 (UTC)[reply]

I have a better idea: post a link on here when you've uploaded it. --COVIZAPIBETEFOKY (talk) 00:25, 7 January 2012 (UTC)[reply]
I completely agree with Fly by Night's suggestion; this post is just SPAM. --pma 12:09, 8 January 2012 (UTC)[reply]

What would it cost to wall off Pakistan from Afghanistan?

The reason is because the Taliban are free to roam between the countries; that's why there's a constant problem with Taliban resurgence.

Could the DoD take Israel's example? They walled off Palestinian-held territories, and look at how that stifled attacks by the Palestinians.

Therefore, couldn't we better control the Taliban menace by walling off the border? --70.179.174.101 (talk) 14:03, 6 January 2012 (UTC)[reply]

This is not a math question. You might try again at another desk.--RDBury (talk) 14:56, 6 January 2012 (UTC)[reply]
See Durand Line for the history of the Pakistan-Afghanistan border, and a figure for its length. See Israeli West Bank barrier and Israel and Egypt–Gaza Strip barrier for information on the Israeli barriers to which you refer. Qwfp (talk) 15:36, 6 January 2012 (UTC)[reply]
You can't just build a barrier and leave the area, since they would just put holes in it, tunnel under, or climb over. A barrier only works if it's continuously guarded. Of course, you can also continuously guard the border without a barrier. The barrier itself isn't necessary, although it does help by slowing crossings to give border guards time to catch those crossing.
The barrier probably shouldn't be right at the border, both because that may not be the most defensible position, and because those building it may come under fire from Taliban in Pakistan. On the surface, perhaps just razor wire might be best, since it's cheap and you can easily add more to patch holes put in by those trying to cross.
Ideally we could put in both infrared cameras and remote-controlled guns aimed at the barrier, so we could watch and open fire on anyone trying to cross. Of course, the Taliban on the Afghan side would try to destroy these, so they would need to be protected from that, too. Perhaps they could be placed on mountain peaks in locations only accessible by helicopter.
Land mines are also a possibility. Of course, they are not politically popular because they can kill or wound innocent people long after a war ends. If they were placed between sections of barbed wire, that might keep innocent people out. They can also be designed to inactivate after some period of time, although I expect this war to last for decades. It might be better to put in low-power landmines only designed to wound, since those missing legs would become ineffective fighters, would be a more visible warning to others, would make identifying networks of Taliban easier, and might be "worse than death" for those who want to become a martyr.
Another problem is caves crossing the border. We need to find all of those, and collapse them with explosives. Bribing smugglers and others who know about them is one way to find them. Perhaps some technology like ground-penetrating radar could help to locate them. StuRat (talk) 17:05, 6 January 2012 (UTC)[reply]
I suspect some of these ideas may have already come to the U.S. government's mind ([1], [2]) -- clearly someone has seen Terminator too many times, and thought "that's f'n great, I'll have me some of that!" Honestly, that's a bad message to take home from that film. -- The Anome (talk) 19:20, 6 January 2012 (UTC)[reply]
I should point out that much of the border between Pakistan and Afghanistan runs along a series of ridges in the Western Himalayas; it would be quite challenging to build a wall all the way, even if it's just a razor wire fence. Along the way, it reaches a number of peaks, including Kohanha, Kohe Baba Tangi, Sakar Sar, Rahozon Zom, Lunkho e Dosare, Akher Tsagh, Kohe Urgunt, Langula E-Barfi, Kohe Shakhawr, Kohe Mandaras, Gumbaz E-Safed, and, most importantly, Noshaq, which are so thoroughly unknown to Westerners that most of them don't even have Wikipedia entries, even though each one of these is taller than the tallest mountain in North America - Denali. It is very difficult to get to the top of any one of these mountains (let alone carry supplies to build a wall there); even the most specialized helicopters can't land or take off at that altitude, and you have gale-force winds blowing nine months a year, heavy snowfall, and frequent avalanches threatening to take out your fence.
But at least you didn't propose to build a fence along the border between Pakistan and China - which goes over K2.--Itinerant1 (talk) 23:04, 8 January 2012 (UTC)[reply]

I need a mathematical function that scores 0 for "infinitely incorrect estimate" or "no answer" and 1 for "exactly precise answer"

I'm helping a friend with an experimental design for a cognitive test. (She's the psych student, not me!) For a cognitive test, she has to score groups (based on whether they work together or individually) on how accurate their estimates are. Sample questions include: "The number of miles between this building and the nearest Dunkin Donuts" or "Number of rooms in the local Red Roof Inn" etc. (fairly arbitrary and yes I realise there are issues). Her predecessor used Mean absolute percentage error, but there are problems with this, in that someone who answers "eight million" to "number of miles between the Earth and the sun" is basically an order of magnitude off, but far far off in terms of MAPE, and this will distort the subject's overall score even if he gets every other question perfectly right, and might even perform worse scorewise than subjects who answer 90 million for that question but are grossly off by an order of magnitude for every other question, if the other questions use small quantities.

We'd like an answer that is way off base to be scored the same as giving no answer at all (a score of zero), and an answer that is perfectly precise to have a score of one.

I thought of the normal distribution, but then the variance parameter would be sort of arbitrary. How would one decide how quickly the score drops off? I thought of the properties of the Gini coefficient. How would I use that? elle vécut heureuse à jamais (be free) 18:32, 6 January 2012 (UTC)[reply]

It appears to me that your problem is with scale. You need to normalize the answers/results of all of the questions before doing the calculations. I regularly normalize values that are in the 0.00-3.00 range with values that are in the 30-300 range. In the 0.00-3.00 values, I divide all values by 3 and I get values that are 0.00-1.00. In the 30-300 range, I subtract 30 and divide by 270 to get values from 0.00-1.00. I can do this because both of my measures have normal distribution. If you don't have normal distribution (which I get with some things, like serum-creatinine measures), I take all my values, square them, sum up the squares, take the square root of that sum, and then divide all values by the square root. That shifts the values so that they are between -1 and 1 (but since mine are all positive, I get 0 to 1). Once everything is between 0 and 1, you won't have issues with scale. -- kainaw 18:45, 6 January 2012 (UTC)[reply]
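A minimal Python sketch of the two normalizations described above (min-max scaling for a known range, and division by the Euclidean norm); the values are made up for illustration:

```python
import math

def minmax_normalize(values, lo, hi):
    """Map values from a known [lo, hi] range onto [0, 1]."""
    return [(v - lo) / (hi - lo) for v in values]

def l2_normalize(values):
    """Divide by the Euclidean norm; non-negative values end up in [0, 1]."""
    norm = math.sqrt(sum(v * v for v in values))
    return [v / norm for v in values]

print(minmax_normalize([30, 165, 300], 30, 300))  # [0.0, 0.5, 1.0]
print(l2_normalize([1.0, 2.0, 2.0]))              # [0.333..., 0.666..., 0.666...]
```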
I think the mean absolute percentage error is fine, but use the log of the value of the guess over the actual value, then just order the results on that and use non-parametric statistics for the rest. You probably have a statistician around who can help you with setting it all up.
I second the approach of using non-parametric statistics. This kind of thing is what it exists for. Unfortunately, it does require a statistician to understand it properly, but if you're doing real modern science, you should have one on tap already: or at least now you know to go and have a look for one. -- The Anome (talk) 19:24, 6 January 2012 (UTC)[reply]
I don't think she can simply scale the MAPE score as described above -- the issue is disparities within a single question. For example, a subject who answers "77 million" for "number of people in China" being compared with subjects who answer "1.4 billion" has a percentage error of 94.2%, but a person who answers "90 million" for "population of Israel" might have a percentage error of over 1200%.
Re: non-parametric statistics -- I'll go tell her that. elle vécut heureuse à jamais (be free) 19:36, 6 January 2012 (UTC)[reply]
The log of the guess over the actual value tells if someone is closer or further away than another person for a particular question like the population of Israel. It simply is saying that double the figure is as bad as half the figure. The non-parametric bit then just gives an ordering for each question so it doesn't matter if some questions tend to be answered less accurately than others. Dmcq (talk) 22:24, 6 January 2012 (UTC)[reply]
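A minimal Python sketch of this log-ratio-then-rank idea; the function names and the Israel example figures are assumptions for illustration only:

```python
import math

def log_ratio_error(guess, actual):
    """Symmetric error: 0 for an exact answer; doubling and halving are equally bad."""
    return abs(math.log(guess / actual))

def rank_within_question(guesses, actual):
    """Rank subjects on one question (1 = closest); ranks are then comparable across questions."""
    errors = [log_ratio_error(g, actual) for g in guesses]
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    ranks = [0] * len(errors)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

# three hypothetical guesses for the population of Israel (taken here as about 7.8 million)
print(rank_within_question([90e6, 8e6, 5e6], 7.8e6))  # [3, 1, 2]
```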
Why not devise a discrete marking system? Use the ratios to mark each question, i.e. (their answer) divided by (the correct answer). Then set up a marking scheme of, say, {0,1,2,3} where you assign a score depending on the answer given. You would need to decide how accurate you want them to be, e.g. give them a 0 if the ratio between their answer and the correct distance to the sun is of the same order, a 1 if it is ±1 order of magnitude and a 2 if it is ±2 orders of magnitude. You can change the scoring system as much as you like depending on the question. Just choose a mark scheme for each question. Fly by Night (talk) 22:08, 6 January 2012 (UTC)[reply]
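One possible reading of such a mark scheme in Python, inverted here so that a higher mark means a closer answer; the thresholds are arbitrary and would be chosen per question:

```python
import math

def order_of_magnitude_mark(guess, actual):
    """Mark on a {0, 1, 2, 3} scale by how many orders of magnitude the guess is off."""
    off = abs(math.log10(guess / actual))  # 0 = same order, 1 = a factor of 10 off, ...
    if off < 0.5:
        return 3   # same order of magnitude as the correct answer
    if off < 1.5:
        return 2   # within about one order of magnitude
    if off < 2.5:
        return 1   # within about two orders of magnitude
    return 0       # hopelessly far off

print(order_of_magnitude_mark(8e6, 93e6))  # roughly one order of magnitude off -> 2
```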
At first glance, it seems the issue is merely a matter of normalization; as Kainaw explained above, any set of data can be normalized to a standard range for direct comparison.
But the problem with that approach is more subtle; the entire concept is ill-formed. The request is to construct a simple function that is applicable to a wide variety of data-domains. This can't be done properly; numerical error in one question is not directly comparable to numerical error in another question, even when normalized.
To avoid being overly mathematical (given that the OP's friend is a psychologist, I presume), I constructed a "counter-example" series of estimation questions. Hopefully this will elucidate the problem:
  • "What year was the Declaration of Independence drafted?"
  • "How distant is the moon from the Earth?"
A "correct" answer to the Orbit of the Moon should accept the variance of the moon's elliptic orbit. At its closest, the moon is about 360,000 km; and at its most distant, it is about 400,000 km. So a "correct" answer really ought to be acceptable to a range within ±5%. If the same degree of error is acceptable for the first question, any year between 1687 and 1864 should be acceptable! If these two questions are scored on the same scale, mixing up the American Revolution and the American Civil War is numerically equivalent to a minute error of astrophysics! The test is ill-formed; the questions can't be compared on a numerical basis. In fact, the units of time (years since a particular calendar-date) and the units of distance are so dissimilar, there's no sensible way to normalize the values. At best, you can normalize to the distribution of answers provided by a control-set of individuals - much the way a standardized test is constructed.
It is implausible to construct a simple mathematical formula that can account for the huge variations in domain-specific acceptable tolerances. The field of computational heuristics is widely studied in artificial intelligence and computer science; it's very difficult. We have an article on Fermi problems, which may help give you some insight. Different scales have different units, so trying to compare "percentage-error in years" and "percentage error in kilometers" is a fundamental failure of dimensional analysis. It can't be done.
Your friend needs to construct a set of standard criteria for each question to determine an appropriate score. Nimur (talk) 22:26, 6 January 2012 (UTC)[reply]
The "classic" way of dealing with outliers with respect to averages is to use the median rather than the mean. Wikipedia doesn't have an article on the "median absolute percentage error", but a quick Google search shows that use of MdAPE is not unheard of, though I can't speak to how frequently, or if it would be well regarded in a cognitive psychology context. -- 140.142.20.101 (talk) 22:29, 6 January 2012 (UTC)[reply]

A mathematical function that scores 0 for "infinitely incorrect estimate" and 1 for "exactly precise answer" is where x(>0) is the actual answer, x0(>0) is the correct answer, and σ(>1) is a factor describing the required precision. Bo Jacoby (talk) 11:11, 7 January 2012 (UTC).[reply]
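A Python sketch of one function with the stated properties (1 for an exact answer, tending to 0 as the estimate becomes arbitrarily wrong); the log-normal-style kernel and the name precision_score are illustrative assumptions, not necessarily the exact formula intended above:

```python
import math

def precision_score(x, x0, sigma=10.0):
    """Score an estimate x against the correct value x0; sigma (> 1) sets the tolerance."""
    r = math.log(x / x0) / math.log(sigma)  # how many factors of sigma the estimate is off by
    return math.exp(-r * r)                 # 1 at x == x0, tending to 0 as x/x0 -> 0 or infinity

print(precision_score(93e6, 93e6))  # 1.0 (exact)
print(precision_score(8e6, 93e6))   # about 0.3 (roughly an order of magnitude off, with sigma = 10)
```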