Dear Ladies and Gentlemen,
[My English is not good, but I will do the best I can.]
First of all, my gratitude to Didier for his kind attention in helping me to join (I had problems with the automatic authentication). And my special gratitude to the honorable people who posted fair messages about me and about other subjects (I saw posts by Muhamed and Maria, but there may be more).
OK, on to the subject that brought me here. In a Google search I found some messages in the Cerebrals forum in which my name is mentioned, and I ask permission to answer them. I do not have much free time, but I will enjoy this.
In about a dozen messages on Cerebrals, Mr. Stevan Damjanovic (and other gentlemen) described me and my work as "stupid", "ridiculous", etc. Maybe he is right. But I would like to make a brief comment on the subject, to try to evaluate how stupid I am and how brilliant those who call me stupid are. I hope my post will serve as an answer to all of them.
I have some friends with truly stratospheric IQs, such as Petri Widsten, Serguei Popov, Kristian Heide and Nicolau Saldanha. Petri's Ph.D. thesis received the award for the best Ph.D. thesis in chemical engineering in his country (besides a summa cum laude distinction). Popov graduated summa cum laude in mathematics at Moscow University, received about a dozen national prizes in the former USSR and in Brazil for his innovative ideas, and became an MS-5 professor at USP and Unicamp at 33 years old. Kristian is the fastest thinker in the world on Brain Master and obtained the highest score on Sigma Test VI. Nicolau won a gold medal at the 1980 International Mathematical Olympiad and has won prizes for his work at Princeton, IMPA and UMPA. Petri obtained the best score on the Sigma Test and Kristian the best score on Sigma Test VI, and both are members of Sigma VI. I have also had a little contact with other people of possibly truly stratospheric IQ, such as Steven Weinberg (Nobel prize) and Nikos Lygeros (Nik holds a world record in number theory, is the creator of Eureka, Archimedes and the G-Test, and Albert told me about other very interesting accomplishments of his).

In my opinion, no IQ test exists that is appropriate for measuring the intelligence of such people. If Petri obtains a very high score on the Sigma Test, that is a way of evaluating the test, not of evaluating Petri, because Petri's ability is more unquestionable than the accuracy of the test. Likewise, if Petri scored at the Mensa level on the Raven Standard Progressive Matrices, that would be bad for the Raven test, but it would not affect the perception of Petri's intellectual level. Supervised clinical tests have a true ceiling of about 130 or 140. Cattell III, for instance, has an unrealistic ceiling of 187 (191 in some countries, 183 in others), but its items are far too elementary to evaluate intelligence at any level above 140. Using Cattell for IQs above 140 implies the false premise that a person who solves simple tasks quickly is also capable of producing great and original ideas. Using supervised tests to measure intelligence above IQ 140 is like trying to measure the literary gift of Dostoevsky or Shakespeare by the speed at which they spell words. This is definitely not correct, because deep and complex thought at high levels cannot be measured by the ability to solve simple problems quickly. Hard tests such as the Mega, the Titan, etc. have items with difficulty appropriate for IQs around 160 or maybe 170 (even if the nominal ceiling is 193, that is not important). The best and hardest puzzles used in the Sigma Test and Sigma Test VI (and some supervised Mathematical Olympiad questions), being similar to real-life problems, can perhaps measure IQ around 180 or a little higher. To evaluate IQs above this level, the only reasonable way is to estimate them from intellectual accomplishments.

In my case, I hold the world record for the longest announced checkmate in blindfold simultaneous chess and the world record for the longest announced checkmate in correspondence chess; I created a theoretical novelty ranked among the world's top 10 by the Sahovski Informator jurors (including world champion Viswanathan Anand), besides a top-19 theoretical novelty and a top-26 best game. I created the first method for standardizing tests on a ratio (proportion) scale (other methods only produce interval and ordinal scales). I designed the invisible machine. I refined the method used to calculate stellar parallax.
I corrected the method for calculating BMI (Body Mass Index). I extended the Darwin-Wallace theory of evolution, among other innovations. In psychometrics I improved about ten traditional methods, including the estimation of the "b" and "c" parameters in the logistic models of Item Response Theory; I improved the method for calculating Cronbach's alpha and other homogeneity coefficients; and I created a method to determine the variation of "c" with theta, etc. After winning two master-level ICCF tournaments I qualified for a world championship semifinal, and I created a method to calculate the factorial of non-integer numbers without using calculus, among other innovations and creations. Because of all this, our friend Baran Yönter (founder of Pars), in an e-mail sent to me in 2002, estimated my IQ at about 200, and Alexandre Prata Maluf estimated it at about 177. It is also important to note that Alexandre Maluf estimated Marilyn vos Savant's IQ at exactly the same value as mine, on the same day (so he had the comparison in mind). My own estimate is about 190.

But it is pointless to speak of rarity IQ when the empirical distribution shows no good adherence to the normal distribution under the Kolmogorov-Smirnov or Anderson-Darling tests, because outside the interval from -2 sd to +2 sd the empirical distribution shows enormous discrepancies from the theoretical Gaussian. If one uses a Weibull distribution, a Gumbel distribution, or other more flexible functions, the problem persists. Using robust statistics, with the bootstrap and/or Markov chain Monte Carlo methods to determine the tails of the distribution, it is possible to obtain more accurate rarities, but it remains very difficult to extrapolate beyond the size of the sample used to norm the test. Therefore it is nonsense to talk about rarity IQs above 130 or 140 on the basis of the false hypothesis that the empirical distribution is normal. The true empirical distribution is not Gaussian, not Weibull, not lognormal, not Cauchy, etc. It resembles the normal (and even more closely the Weibull) only in the interval from -2 sd to +2 sd. If one uses a theoretical distribution that adheres better to the empirical distribution (for instance:
http://www.sigmasociety.com/artigos/norma_tig.pdf), then one can speak of rarity IQ up to the highest level determined by the sample (and no further!), with the distortion inherent in any ordinal scale. The evaluation system I use in the Sigma Test solves this problem in an appropriate way, using potential ratios instead of an ordinal rarity scale. In those terms a rarity IQ of 190 has no meaning, but a potential IQ of 225 can be defined more exactly, both numerically and conceptually. Of course, one needs to read my articles on this page to understand the method:
http://www.sigmasociety.com/sigma_teste ... _teste.asp. I give very simple and didactic examples on the Sigma Society page, and anyone can understand the method. The text is in Portuguese, but
http://www.babelfish.org solves the language problem.
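To make the adherence-test idea concrete, here is a minimal sketch in Python (numpy and scipy). The data are simulated heavy-tailed scores standing in for real norming data, and the mean-100 / sd-16 convention is an assumption of the illustration, not the Sigma Test procedure. The sketch fits a normal and checks it with Kolmogorov-Smirnov and Anderson-Darling, then compares body and tail quantiles:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Student-t (df=4) scores rescaled to mean 100 / sd 16: heavier tails than a Gaussian.
sample = 100 + 16 * rng.standard_t(df=4, size=5000)

mu, sigma = sample.mean(), sample.std(ddof=1)
# Note: the KS p-value is optimistic when mu and sigma are estimated from the same data.
ks = stats.kstest(sample, "norm", args=(mu, sigma))
ad = stats.anderson(sample, dist="norm")
print("KS statistic:", round(ks.statistic, 4), " p-value:", ks.pvalue)
print("AD statistic:", round(ad.statistic, 3), " critical value at 1%:", ad.critical_values[-1])

# The normal model tracks the body of the data but misses the extreme quantiles.
for q in (0.84, 0.975, 0.999):
    emp = np.quantile(sample, q)
    theo = stats.norm.ppf(q, loc=mu, scale=sigma)
    print(f"quantile {q}: empirical {emp:6.1f}   normal model {theo:6.1f}")

This is exactly the kind of check I mean above: the fit looks acceptable between -2 sd and +2 sd and breaks down in the tails.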
People who do not try to understand my method, and who only manage to repeat information they have read or heard, will criticize ideas they do not understand, because it has always been this way throughout history: people with intelligence a little above average cling to the traditional culture and condemn the innovations proposed by the truly creative geniuses. Oops! Excuse me, I meant to say "stupid people" instead of "geniuses".
About the message from Nick, saying that Poincaré's IQ was 156 and von Neumann's was about 162: his doubt is legitimate. An IQ of 162 is very low for von Neumann. Other cases are Feynman, who scored only 125 on a bad test, and Kasparov, who scored 123 on the Raven and 135 on the Eysenck. I made a study of the RSPM, and this test has 7 items (out of 60) with some semantic mistake, and all of its items have statistical problems (about 70% of the alternatives across all items are not preferential attractors; that is, in a typical sample with an average IQ of about 100, almost nobody chooses about 70% of the alternatives). Obviously Feynman and Kasparov are universal geniuses, with rarity IQs above 190 and potential IQs above 220. If you read some of Poincaré's texts about time and space, you find the same ideas on which Einstein and Lorentz founded relativity theory, but Poincaré thought of them before Einstein and Lorentz. A rarity IQ of about 156 can be found on every corner (about 1 in 3,000). Probably 156 was the ceiling of the test given to Poincaré, or maybe the test overvalued the speed of solving basic questions. Prominent mathematicians and scientists like von Neumann and Poincaré probably have a rarity IQ above 180 and a potential IQ above 200.

Stevan's message on this subject is not exact (in reality, his opinion is totally wrong), because between IQ and accomplishment there is a correlation (Pearson's linear coefficient, Spearman's correlation, Kendall's tau, etc.), and this correlation varies as a function of numerous factors (among them the IQ level, the accomplishment level, the sample size, the sample variance, the sample kurtosis, etc.). A small example: among people with IQ below 120 the Pearson correlation is stronger than among people with IQ above 150, if you consider the correlation between IQ and practical accomplishment. But if you consider the correlation between IQ and abstract accomplishment, the result is the reverse. Therefore in pure mathematics (very abstract) the correlation between IQ and accomplishment is strongest, mainly at high levels. In applied mathematics (not so abstract) the correlation is strong, mainly at high levels, but not as strong as in pure mathematics. So von Neumann's low score of 162 is very strange. Around 1950 the world population was about 2.5 billion, and there were about 100,000 people with a rarity IQ above 162, but there were not 100,000 people with intelligence above von Neumann's.
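For readers who want to see where figures like these come from, here is the rarity arithmetic under the (questionable, as I argue above) assumption of an exactly Gaussian distribution. The choice of sd = 16 is an assumption of this sketch; with sd = 15 or 24 the counts change considerably, which is one reason such figures are always rough:

from scipy.stats import norm

def rarity(iq, mean=100.0, sd=16.0):
    # Upper-tail probability of a score under a pure normal model.
    return norm.sf((iq - mean) / sd)

p162 = rarity(162)
p156 = rarity(156)
print("P(IQ > 162) =", f"{p162:.2e}", "-> about 1 in", round(1 / p162))
print("Expected count in a population of 2.5 billion:", round(2.5e9 * p162))
print("P(IQ > 156) =", f"{p156:.2e}", "-> about 1 in", round(1 / p156))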
Of course, a superficial analysis of this subject, like the one Stevan makes, creates confusion and leads to wrong interpretations of the facts. Unhappily I do not have the free time to read and correct all the mistakes committed by Stevan (and others). But if he keeps saying foolish things involving my name, I will dedicate a little time to giving him a few more severe lessons.
To the people who know only the most basic statistics, I suggest researching this subject further. The symmetrical, mesokurtic Gaussian distribution is not the only way (nor the best way) to represent distributions of IQ (and of many other empirical data). Most of the time a Gaussian fails completely for points in the tails, outside the interval from -2 sd to +2 sd. There are many ways of making robust adjustments that make the theoretical distribution more representative of the empirical data (the bootstrap is widely used for this). It is very important to know at least a little about this before saying injurious nonsense.
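Here is a small sketch of what the bootstrap can and cannot do in this situation (Python; the simulated sample, the quantiles and the number of resamples are assumptions of the illustration). Resampling gives an honest interval for a quantile, and the interval is far wider in the tail than in the body; and since every resample is drawn from the observed data, nothing lets you extrapolate beyond the range of the sample:

import numpy as np

rng = np.random.default_rng(1)
data = 100 + 16 * rng.standard_t(df=4, size=2000)   # stand-in for raw test scores

def bootstrap_quantile_ci(x, q, n_boot=5000):
    # Percentile bootstrap: resample with replacement and recompute the quantile.
    n = len(x)
    estimates = np.array([np.quantile(rng.choice(x, size=n, replace=True), q)
                          for _ in range(n_boot)])
    return np.percentile(estimates, [2.5, 97.5])

for q in (0.50, 0.975, 0.999):
    lo, hi = bootstrap_quantile_ci(data, q)
    print(f"quantile {q}: 95% bootstrap interval [{lo:.1f}, {hi:.1f}]")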
I have not known these subjects forever: I deduced a large part and learned the rest. I see that some people here are making very superficial criticisms without the minimum of knowledge and without any discernment. That is bad for those people and for this group.
Here are some recommended sites on statistics for people sincerely interested in learning about this:
http://mathworld.wolfram.com (good on basics, a little on advanced topics)
http://www.statsoft.com/textbook/stathome.html (good on basics and advanced)
http://www.itl.nist.gov/div898/handbook/index.htm (good on basics and advanced)
http://www.sigmasociety.com/st_stat_tools.pdf (good on robust and very advanced methods; available exclusively to friends).
Throwing all the facts in the garbage, pretending that IQ distributions are Gaussian, building an entire mythology on top of this, and imposing a rarity ceiling of around IQ 200 is what everybody does who does not know the basics of the subject.
In summary: if a not very intelligent and very arrogant person knows nothing about statistics (only the mean, the standard deviation and the Gaussian distribution), he may suppose that all empirical measures must be normally distributed. Another person, whose intelligence is greater than his arrogance (like me), who also knew nothing about statistics (like me, a few years ago), distrusts the theoretical model, because he sees that the empirical data differ brutally from the theoretical distribution, and that this difference increases progressively toward the extremes of the distribution. Then the person with lower intelligence and higher arrogance thinks: "The real world is not working as the theory says, so the real world is wrong. I will repair it so that it works according to the theory, and I will forbid any empirical measurement from being larger than the theory allows. If anybody disagrees with me, he is wrong and I will mock that poor fool a lot." The somewhat wiser person thinks: "The theory is not completely appropriate, so I need a better theoretical model to describe this case, and I will share it with other people interested in understanding the facts better." In this way people with different levels of understanding of the world arrive at different conclusions, and their opinions diverge.
When an intelligent person of good character does not understand something, he tries to learn and to understand before giving an opinion. When an obtuse and/or unscrupulous person does not understand something, he mocks, he condemns, etc.
I want to believe that this unpleasant situation was an isolated misunderstanding and that it will not happen again. I hope to post in this group again in the future, but for more pleasant reasons.
One more thing: about the question posted by Aaron Ellison, "what is standard deviation?", maybe I can offer a little help. If you measure something many times, or if you measure many similar things, you will find different results. For instance, if you measure the heights of 1,000 people, you will find more people between 1.65 m and 1.75 m than between 1.85 m and 1.95 m, because the average height is about 1.70 m and the data concentrate close to the average. If you plot all the measurements in a bar chart, you get a figure similar to a Gaussian distribution (bell curve). In this way you can summarize your full data set with a few parameters: central tendency, dispersion, kurtosis and skewness. This is very useful, because if you have a sample with millions of elements you do not need a very large amount of information to understand its main properties; these four fundamental parameters are enough. The central tendency can be determined by the mean, the median, the mode and other basic measures. There are also more accurate methods to determine the central tendency, such as Tukey's biweight, Andrews' wave, Hampel's M-estimator, etc.
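As a small illustration of these robust location estimators, here is a sketch in Python. The data are invented (a few heights with one gross outlier), and the tuning constant c = 6 MAD units in the one-step biweight is a common textbook choice, not a value taken from the Sigma Test norm cited next:

import numpy as np
from scipy import stats

def biweight_location(x, c=6.0):
    # One-step Tukey biweight: start from the median, downweight points far
    # from it (measured in MAD units) and give zero weight beyond c * MAD.
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))
    if mad == 0:
        return m
    u = (x - m) / (c * mad)
    w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
    return m + np.sum(w * (x - m)) / np.sum(w)

heights = np.array([1.68, 1.72, 1.70, 1.65, 1.74, 1.69, 1.71, 2.40])  # one gross outlier
print("mean               :", round(np.mean(heights), 3))
print("median             :", round(np.median(heights), 3))
print("12.5% trimmed mean :", round(stats.trim_mean(heights, 0.125), 3))
print("biweight location  :", round(biweight_location(heights), 3))

The single outlier drags the mean well above 1.75 m, while the median, the trimmed mean and the biweight all stay near 1.70 m.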
See the 2005 norm of the Sigma Test at http://www.sigmasociety.com/sigma_teste ... _teste.asp. The dispersion is the parameter that tells how close the data are to the central tendency. Obtaining an IQ score of 148 on a test with mean 100 and standard deviation 24 is equivalent to obtaining an IQ score of 132 on a test with mean 100 and standard deviation 16, or to obtaining an SAT score of 1250 on a scale with mean 940 and standard deviation 155.
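Written out, that equivalence is nothing more than converting a score to a z-score on its own scale and re-expressing it on the other scale (a two-line sketch; the means and standard deviations are the ones quoted above):

def convert(score, mean_from, sd_from, mean_to, sd_to):
    z = (score - mean_from) / sd_from        # how many sd above the mean
    return mean_to + z * sd_to

print(convert(148, 100, 24, 100, 16))        # 132.0  (IQ scale, sd 24 -> sd 16)
print(convert(132, 100, 16, 940, 155))       # 1250.0 (IQ scale, sd 16 -> SAT scale)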
The kurtosis determines how thin or "fat" the data distribution is in comparison with the normal distribution, and the skewness determines its asymmetry. An extreme simplification consists of supposing that every distribution in the world is normal (a mesokurtic, symmetric Gaussian). Of course, in the real world things do not work exactly that way. For this reason there are methods to evaluate how similar the empirical distribution is to the theoretical one: goodness-of-fit (adherence) tests such as Shapiro-Wilk, Jarque-Bera or the classical chi-square method, or tests that are more sensitive to the shape of the distribution, such as Anderson-Darling or Kolmogorov-Smirnov. If the empirical distribution is very similar to the theoretical normal distribution, we can use that distribution to calculate properties of our sample and of the population it came from. But if the empirical distribution is very different from the theoretical normal, we obviously need to compare the data with other distributions, such as the gamma, lognormal, Cauchy or Poisson, or with more flexible distributions such as the Weibull and the Gumbel. See this article: http://www.sigmasociety.com/artigos/norma_tig.pdf.

In many cases you can find two or more theoretical distributions describing different parts of your empirical distribution: between -1.5 sd and +2 sd, for instance, a Weibull may fit with good adherence, while in the tails you need a Pareto distribution (a small sketch of this piecewise idea appears after the reading list below). It is also possible to have a sample for which no theoretical distribution fits well; in such cases you can use the bootstrap method. It is also important to remember that the standard deviation is only one of the ways to determine dispersion; there are more accurate ones, such as the Stahel deviation, the Ronchetti deviation, etc. Most psychometricians know nothing about this, which is why they make serious mistakes on the subject and even insult others without real reason. It is very dangerous to believe everything you read. I do not want to offend psychometricians with this comment; it is directed at a single person. But perhaps others can benefit from the recommended readings (especially factor analysis, hierarchical cluster analysis, neural networks and item response theory). I suggest visiting these sites to learn more about the subject:
http://www.statsoft.com/textbook/stathome.html (good on basics and advanced)
http://www.itl.nist.gov/div898/handbook/index.htm (good on basics and advanced)
http://www.education.umd.edu/EDMS/tutor ... tpage.html (good on IRT, specific to tests)
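Finally, here is the piecewise idea mentioned above (one distribution for the body of the data, a Pareto-type distribution for the tail) in sketch form. Everything here is an assumption of the illustration: the data are simulated, the 95th-percentile threshold is a conventional peaks-over-threshold choice, and scipy's generalized Pareto stands in for the Pareto family. This is not the fitting procedure of the Sigma Test norm, only the general technique:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = 100 + 16 * rng.standard_t(df=4, size=5000)   # simulated heavy-tailed scores

# Body: fit a Weibull to everything below a high threshold.
threshold = np.quantile(scores, 0.95)
c_b, loc_b, scale_b = stats.weibull_min.fit(scores[scores <= threshold])

# Tail: fit a generalized Pareto to the exceedances over the threshold.
exceedances = scores[scores > threshold] - threshold
c_t, _, scale_t = stats.genpareto.fit(exceedances, floc=0)

print("Weibull body (shape, loc, scale):", round(c_b, 2), round(loc_b, 1), round(scale_b, 1))
print("GPD tail (shape, scale):", round(c_t, 2), round(scale_t, 1))

# Combined tail probability beyond a high score: P(above threshold) * P(exceedance | above).
x = 180.0
p = 0.05 * stats.genpareto.sf(x - threshold, c_t, loc=0, scale=scale_t)
print("P(score >", x, ") under the piecewise model:", f"{p:.2e}")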
Cheers!
Melao