The following are free excerpts from Section II.I.II of my recent book “Philosophy Unscrambles Dark Matter”.

II.I.II. The Actual Anomalies

Not only are the redshifts of far-off galaxies not due to the Doppler Effect (and thus not representative of radial velocities), there are also genuine problems with the established distances of those galaxies. Official distances of nearer objects have been determined using common techniques of geometry, and these distances are correct. The case of far-off galaxies was more challenging; eventually a method was adopted that uses the luminosity of a certain kind of star (the Cepheid variable) as an indicator of distance. The theory of the redshift-distance relationship had already been published, with distances determined on the basis of luminosity alone, when the important works of Fritz Zwicky (1933 and 1937) emerged. Zwicky analyzed a huge cluster of galaxies with respect to its total size as well as the luminosity and redshift profiles of its individual member galaxies. This type of analysis offered a strong hint that a straightforward geometrical technique could be applied to determine galactic distances. Unfortunately, the hint was taken up by the Expansionists (i.e. relativists) and never brought to general view in its original, simpler form: after sensing the anomaly in this aspect, the Expansionists distorted the facts and implemented the hint within twisted expansionist terminology and framework, keeping the anomalous results hidden from view. Before discussing the hint, it is better to apply the simple technique of geometry to determine the distances of (i) the Moon and (ii) the Sun.

Here we have taken standard values for the ‘angle of view’ and diameters of the Moon and the Sun from online sources[i]. Our calculated distances for both objects differ only slightly from the official distances; we therefore regard this method as accurate for the purpose of evaluating distance. The hint we get from the works of Zwicky is that he estimated the total diameter of the Coma Cluster. In his 1937 paper, he took the radius of the Coma Cluster to be only 2 million light years, which is actually wrong, as the up-to-date official estimate is 10 million light years. The reason for the underestimation was that, at that time, the distance of the Coma Cluster was also underestimated at only 45 million light years, whereas by the modern, ‘finalized’ standard it is almost 321 million light years. The hint we can now take is that the diameter is 20 million light years[ii], which should form an ample, measurable angle of view on the sky. Now, instead of relying on distance measurements based on luminosity, let us calculate the distances of a few prominent astronomical objects on the basis of simple geometry.
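The geometric check described above can be sketched in a few lines. This is a minimal sketch, assuming the commonly cited round figures for the angular and physical diameters (Moon: ~3,474.8 km across, ~0.52° on the sky; Sun: ~1,391,400 km, ~0.533°); the exact input values are my assumptions for illustration, not figures taken from the book's table.

```python
import math

def geometric_distance(diameter, angle_deg):
    """Distance from apparent angular size: an object of the given
    diameter subtends angle_deg on the sky, so
    distance = (diameter / 2) / tan(angle / 2)."""
    return (diameter / 2) / math.tan(math.radians(angle_deg) / 2)

# Assumed standard values from common online sources:
moon_distance = geometric_distance(3474.8, 0.52)       # ~383,000 km
sun_distance = geometric_distance(1_391_400, 0.533)    # ~1.496e8 km

print(f"Moon: {moon_distance:,.0f} km (official mean ~384,400 km)")
print(f"Sun:  {sun_distance:,.0f} km (official ~149,600,000 km)")
```

Both results land within about one percent of the official figures, which is the point the paragraph above makes.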

We have seen that the above method of distance estimation requires (i) the angle of view on the sky and (ii) the actual or approximate diameter of the object. Historically, estimates of the distances of galaxies beyond the Milky Way started in the 1920s on the basis of the luminosity of a certain kind of star, perhaps because no data, calculation or approximation of the diameters of those astronomical objects was available at that time. In the 1930s, the works of Fritz Zwicky and others featured estimates of the diameters of astronomical objects located far beyond the Milky Way. Initially those estimates were wrong, but they were improved and corrected over time. The angles of view of those objects on the sky were not difficult to figure out either, and were eventually determined. But matters were in the hands of the Expansionists, who contaminated the simple techniques of geometry with formulas of redshifts,[iii] possibly after having sensed the type of anomalies that would have surfaced had the straightforward methods been implemented. As a matter of fact, the simple distance determination method has so far not been applied[iv] even in the case of Andromeda, the nearest large galaxy, whose official diameter in light years is known[v] and whose angle of view on the sky is also known[vi] to be slightly larger than six times the angle of the Moon. Likewise, an estimate of the diameter of the Coma Cluster is available, and its angle of view on the sky is known to be almost four times the angle of the Moon[vii]. Here, for our analysis, we select another astronomical ‘object’: the famous Hubble Deep Field image. This image covers a tiny section of sky whose angle of view is almost 10 times smaller[viii] than that of the Moon, yet it contains ten times more galaxies than the Coma Cluster. From it we can get a rough but safe (lower-side) estimate of a diameter of almost 60 million light years: with roughly 1000+ galaxies, the Coma Cluster should have on average 33 galaxies from edge to edge, while with 10,000 galaxies the Deep Field image should have 100 galaxies from edge to edge, which is three times greater; we therefore take the diameter of the Deep Field image to be three times that of the Coma Cluster. We regard this as a safe lower-side estimate because the Deep Field is not a cluster of relatively compacted galaxies (having compressed in-between distances), as is the case with the Coma Cluster.
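The edge-to-edge counting argument can be sketched as follows. The square-root scaling (galaxies spread over a roughly circular field, so the count across a diameter goes as the square root of the total count) is my reading of the argument, not a formula stated in the book.

```python
import math

# Edge-to-edge galaxy count scales roughly as the square root of the
# total count when galaxies are spread over a circular field.
coma_edge = math.sqrt(1000)       # ~31.6, close to the 33 quoted above
deep_edge = math.sqrt(10_000)     # 100 galaxies from edge to edge
ratio = deep_edge / coma_edge     # ~3.16, rounded down to 3 in the text

coma_diameter_mly = 20                            # official Coma estimate
deep_field_diameter_mly = 3 * coma_diameter_mly   # lower-side: 60 Mly
print(ratio, deep_field_diameter_mly)
```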

Hubble Deep Field Image – Credit: NASA/ESA

By applying the straightforward geometrical method, we get the following ‘anomalous’ estimates of the distances of these astronomical objects:

For the Andromeda Galaxy, we notice a large discrepancy: the distance calculated through straightforward geometry is 3.9 million light years, compared with the official distance of only 2.5 million light years. Likewise, the official distance of the Coma Cluster is only 321 million light years, but geometry tells us it should be located at almost 509 million light years. The case of the Deep Field image is particularly ‘anomalous’ because the calculated distance lies far beyond the permitted zone of the so-called standard model. Readers are requested to recalculate these figures themselves to see the genuineness of the results. For instance, how can a huge cluster of almost 1000 separate galaxies, with large distances between them as well, form a smaller angular view on the sky than the single galaxy Andromeda, and yet be located at a distance of only 321 million light years, when Andromeda itself is officially located at 2.5 million light years? An object that is larger by a ratio of many thousands, not merely many hundreds, is appearing smaller; this means the distance of the object cannot be as low as 321 million light years. It makes perfect sense that the Coma Cluster's actual distance is more than 500 million light years.
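The three ‘anomalous’ figures above can be reproduced with the same small-angle formula. The diameters are the official estimates cited in this section; the angular diameters are my reconstruction of the inputs (Andromeda ~3.167° as given later in this section, Coma ~2.25°, i.e. roughly four times the Moon, and the Deep Field ~0.05°, one tenth of the Moon), so treat them as assumptions.

```python
import math

def geometric_distance_ly(diameter_ly, angle_deg):
    """Straightforward geometry: the distance at which an object of
    the given diameter subtends the given angle on the sky."""
    return (diameter_ly / 2) / math.tan(math.radians(angle_deg) / 2)

# Diameters from official estimates; angles reconstructed as noted above.
andromeda = geometric_distance_ly(220_000, 3.167)   # ~3.9e6 ly
coma = geometric_distance_ly(20e6, 2.25)            # ~5.09e8 ly
deepfield = geometric_distance_ly(60e6, 0.05)       # ~6.9e10 ly

print(f"Andromeda:  {andromeda / 1e6:.1f} million ly (official: 2.5 million)")
print(f"Coma:       {coma / 1e6:.0f} million ly (official: 321 million)")
print(f"Deep Field: {deepfield / 1e9:.0f} billion ly")
```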

As stated earlier, the straightforward geometric method has so far not been applied to distance estimation even in the case of Andromeda, the nearest large galaxy, whose diameter as well as angle of view on the sky is known. The following section of a Wikipedia article explains which methods have been applied so far; anyone should wonder why simple geometry is not among them[ix]:

Distance estimate

At least four distinct techniques have been used to estimate distances from Earth to the Andromeda Galaxy. In 2003, using the infrared surface brightness fluctuations (I-SBF) and adjusting for the new period-luminosity value and a metallicity correction of −0.2 mag dex−1 in (O/H), an estimate of 2.57 ± 0.06 million light-years (1.625×1011 ± 3.8×109 astronomical units) was derived. A 2004 Cepheid variable method estimated the distance to be 2.51 ± 0.13 million light-years (770 ± 40 kpc).[2][3] In 2005, an eclipsing binary star was discovered in the Andromeda Galaxy. The binary[c] is two hot blue stars of types O and B. By studying the eclipses of the stars, astronomers were able to measure their sizes. Knowing the sizes and temperatures of the stars, they were able to measure their absolute magnitude. When the visual and absolute magnitudes are known, the distance to the star can be calculated. The stars lie at a distance of 2.52×106 ± 0.14×106 ly (1.594×1011 ± 8.9×109 AU) and the whole Andromeda Galaxy at about 2.5×106 ly (1.6×1011 AU).[4] This new value is in excellent agreement with the previous, independent Cepheid-based distance value. The TRGB method was also used in 2005 giving a distance of 2.56×106 ± 0.08×106 ly (1.619×1011 ± 5.1×109 AU).[5] Averaged together, these distance estimates give a value of 2.54×106 ± 0.11×106 ly (1.606×1011 ± 7.0×109 AU).[a] And, from this, the diameter of Andromeda at the widest point is estimated to be 220 ± 3 kly (67,450 ± 920 pc).[original research?] Applying trigonometry (angular diameter), this is equivalent to an apparent 4.96° angle in the sky.

From the above quoted Wikipedia text, the very last sentence is particularly important. This sentence is not supported by a citation, and it seems it was added by some curious individual who actually applied trigonometry to the accepted diameter and distance of the Andromeda galaxy and found that the angle of view on the sky should be 4.96° instead of the official, observed value of 3.167°. If we now put the value of 4.96° into our straightforward geometry for the case of Andromeda, we find that with a 4.96° angle of view, the calculated distance of the Andromeda galaxy tallies with the official distance of 2.54 million light years.
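The uncited last sentence of the quote can be checked directly: given the quoted diameter (220,000 ly) and the averaged official distance (2.54 million ly), elementary trigonometry gives the angle that should be seen on the sky.

```python
import math

# Reverse check: what angle should Andromeda subtend if it really is
# 220,000 ly across at the official distance of 2.54 million ly?
diameter_ly = 220_000
official_distance_ly = 2.54e6
angle_deg = 2 * math.degrees(math.atan((diameter_ly / 2) / official_distance_ly))
print(f"{angle_deg:.2f} deg")   # ~4.96 deg, vs the observed ~3.167 deg
```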

Here we see that someone's independent calculations using official trigonometry have confirmed the results of our commonsense-based, straightforward geometric formula. It is clear that if Andromeda were actually located at a distance of just 2.54 million light years, then it should form a larger angle of view of 4.96°, which is larger than the actually observed angle of only 3.167°. Therefore, the official distance of the Andromeda galaxy is not supported by the known diameter and angle of view on the sky; the official distance defies official trigonometry.

The case of the Deep Field image is anomalous and complex as well, because this image covers galaxies located over a wide range of distances along the line of sight. But if ‘nearer’ galaxies or objects are included in this image, whose total angular diameter on the sky is just one tenth that of the Moon, then those ‘nearer’ galaxies must also be located very far away. For example, out to the distance of the Coma Cluster, only 20-30 galaxies should completely fill this image. The angular diameter of the Deep Field image is almost 45 times smaller than that of the Coma Cluster, yet it contains 10 times more galaxies within a very small angle on the sky. Roughly speaking, there should be only a few hundred ‘foreground objects’, and even that ‘foreground’ should be located very far away. The difficulty of ‘foreground objects’, however, has been largely resolved by the ‘Extreme Deep Field’ image, which is in effect a close-up assessment of the core or nucleus of the same Deep Field image: 5500 galaxies are assessed, but the angle of view is also reduced to almost one fourteenth the angular size of the Moon. The overall implication for distance estimation remains almost the same. With a total of 10,000 galaxies and a margin of a few hundred ‘foreground objects’, the edge-to-edge diameter has been taken at only 60 million light years, which is a safe lower-side estimate, because the official diameter estimate for the Coma Cluster, with only 1000 and squeezed-together galaxies, is 20 million light years. Furthermore, if we remove the foreground objects from the Deep Field image, an even greater number of background objects will be exposed and the total number of galaxies in the image will increase.
It is a safe lower-side estimate also from another perspective: if the distance between a small dot at one edge and another small dot at the opposite edge should be at least the distance between the Milky Way and one of the galaxies of the Coma Cluster, which is actually located at 500 million light years yet still lies within our cosmic neighborhood, then the estimated distance of the farthest galaxy in the Deep Field image reaches almost 583 billion light years from Earth. A moderate estimate might be some 20-40% of this higher-side estimate; thus the farther galaxies of the Deep Field image may, on a moderate estimate, be located at a distance of 100 to 200 billion light years. We can attempt a more precise moderate estimate of these distances. The Deep Field image officially contains 10,000 galaxies, including many larger foreground galaxies. If we removed the foreground galaxies, even more small-looking galaxies would be revealed from behind them; but for the sake of our moderate estimate, we say there are only 10,000 small-looking background galaxies. From edge to edge, there are only 100 galaxies (each 80,000 light years across), separated by a moderate distance of two million light years each. With these settings, we get an edge-to-edge diameter of 208 million light years for the visible background extent of the Hubble Deep Field image. With this moderate estimate of the diameter, the distance of the farthest visible (small-looking) galaxies in the Deep Field image comes to about 238 billion light years. Even at the safer lower-side estimate of 68 billion light years, this is a serious discrepancy for the standard model, in which viewable galaxies must not cross a distance of 13.2 billion light years if they are to remain in conformity with the standard age of the universe of 13.8 billion years.
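The moderate estimate above can be sketched as follows. The 100-galaxy count, the 80,000 ly galaxy size and the 2 million ly spacing are the section's own assumptions; the 0.05° field width (one tenth of an assumed ~0.5° Moon) is my reconstruction of the angular input.

```python
import math

# Moderate estimate for the Deep Field's background extent:
# 100 galaxies edge to edge, each ~80,000 ly across, with a
# ~2 million ly spacing allotted to each.
n = 100
diameter_ly = n * 80_000 + n * 2_000_000     # 208 million ly
angle_deg = 0.05                             # ~one tenth of the Moon's angle
distance_ly = (diameter_ly / 2) / math.tan(math.radians(angle_deg) / 2)

print(f"{diameter_ly / 1e6:.0f} million ly across "
      f"-> {distance_ly / 1e9:.0f} billion ly away")
```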

At a huge distance scale of many hundred billion light years, the farthest galaxies ‘look’ small for obvious reasons. But NASA loves to tell us that those galaxies are actually smaller in size, and their ‘standard’ reason is also obvious: at a distance of only 13+ billion light years, large galaxies should not appear so small. NASA very conveniently informs us that earlier galaxies are actually smaller in size, in the following words[x]:

When we look at very distant galaxies, we see a completely different picture. Many of these galaxies tend to be small and clumpy, often with a lot of star formation occurring in the massive knots. 

In my opinion, the farthest visible galaxies, being located on a distance scale of many hundred billion light years, are typically very large galaxies, since smaller ones simply could not be seen from such huge distances. NASA insists that they are smaller in size only to project them onto a little, unrealistic but ‘standard’ distance scale of just 13+ billion light years. When a galaxy actually located at a distance of many hundred billion light years is declared to be located at only 13+ billion light years, then ‘yes’, it is smaller in size, and may also be of a ‘half manufactured’ sort. If the small-looking background galaxies of the Deep Field image were located at 13.5 billion light years, then the edge-to-edge diameter of the visible background extent of this image should be only 11.8 million light years, which is almost equal to the diameter of our small Local Group[xi], which contains only three large galaxies along with just 50 other dwarf galaxies. The Wikipedia article about the current record holder for farthest galaxy[xii] informs us that the diameter of this farthest galaxy, ‘GN-z11’, is only 25,000 light years. If the diameter of the visible background extent of the Deep Field image is only 11.8 million light years, then from edge to edge there are 100 small galaxies, each with a diameter of only 25,000 light years and each separated by a distance of only 93,000 light years; the result would be what NASA wants to tell us, that those smaller background galaxies are located at a distance of only 13+ billion light years. These results do not match the actual Deep Field image, a careful glimpse of which reveals a very sparse density of galaxies, such that the edge-to-edge smaller-looking galaxies appear to be separated by more than our previous moderate estimate of 2 million light years each. In fact the claim of standard model cosmology that the early universe was ‘denser’ is not actually confirmed, as the farthest galaxies in the Deep Field image are not denser than our local density of galaxies.
For this reason official people often say that the early ‘dense’ universe can be seen only in the CMB, because ‘early’ galaxies do not show the desired high density. The small-looking background galaxies are in fact very large galaxies, and from edge to edge they are separated by very large distances, far more than our previous moderate figure of 2 million light years. By no means can they fit within a diameter of only 11.8 million light years, and thus by no means can they reasonably be demonstrated to be located at a distance of only 13+ billion light years.
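The 11.8 million ly figure and the 93,000 ly spacing used in this argument can be checked with the same geometry, again assuming a 0.05° field width (my reconstruction) and the 13.5 billion ly standard-model distance.

```python
import math

# If the Deep Field's background galaxies really sat at ~13.5 billion ly,
# the 0.05-degree field could only be this wide from edge to edge:
distance_ly = 13.5e9
angle_deg = 0.05
diameter_ly = 2 * distance_ly * math.tan(math.radians(angle_deg) / 2)
print(f"{diameter_ly / 1e6:.1f} million ly")   # ~11.8 million ly

# Packing 100 galaxies of 25,000 ly each (the GN-z11 figure) across it
# leaves roughly 93,000 ly per galaxy as spacing:
gap_ly = (diameter_ly - 100 * 25_000) / 100
print(f"{gap_ly:,.0f} ly between galaxies")
```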

The primary objective of this book is not to highlight the discrepancies of Big Bang Cosmology. Basically, two reasons persuaded me to include these anomalies in this section of the book: firstly, readers should forget the so-called ‘anomaly’ of dark matter and think about the actual anomalies; secondly, it seemed appropriate to repeat the pattern of Fritz Zwicky in presenting an apparently out-of-context anomaly within the discussion of a separate topic. As far as clusters of galaxies are concerned, there is no genuine anomaly of ‘dark matter’, because redshifts do not mean ‘radial velocities’ and there is actually no ‘velocity dispersion’ within the cluster; Zwicky was also trying to assert the same thing. The actual anomaly, hinted at by the study of the cluster as a whole, is the anomaly in the official distances of astronomical objects, because they differ extensively from the distances that can be calculated quite easily by employing the simple technique of geometry. The discrepancy starts right from the Andromeda Galaxy, so no excuse of ‘curved spacetime’ over very long distances will work. The actual, strict finding of Edwin Hubble was only that there is a linear relationship between ‘redshift’ and ‘distance’; to be precise, for Hubble, the reason for the redshifts was not known[xiii]. But unfortunately, official science has adopted ‘velocity’ as a valid reason for redshifts. With redshifts interpreted in terms of ‘velocity’, the formula for those redshifts contains ‘c’, the speed of light. With ‘c’ included in the formula, ‘v’ (velocity) will never reach close to ‘c’, or the results will be twisted in some other way.

Within the next few years[xiv], NASA is going to launch the James Webb Space Telescope, which is said to be 100x more powerful[xv] than its highly successful predecessor, the Hubble Space Telescope (HST). The strange aspect is that despite the 100x power of the upcoming telescope, NASA is dead sure that no galaxy beyond 13.6 billion light years will be seen[xvi]. NASA explains that the Big Bang occurred 13.8 billion years ago; although the upcoming telescope will not be able to see the Big Bang itself, the very first galaxies, at a distance of 13.6 billion light years, will be resolved, whereas nothing will be seen beyond that distance because there was no light at all before that era.

The fact is simply that, due to twisted formulas, the ‘distance’ will never be shown as greater than a certain value. The reason behind NASA's absolute surety that no galaxy older than 13.6 billion years will be seen even by a 100x more powerful telescope is that NASA is fully aware[xvii] that the limit on distance is imposed by the formula itself. Please see the following table of different values of redshift (z) and the corresponding distances of galaxies in light years:

The redshift-distance relationship that one would expect to be tabulated here in simple linear form, where a certain increment in redshift results in a regular (linear) increment in distance, has actually been implemented in a twisted form: as the value of z (redshift) increases beyond 2, there is a decreasing trend in the distance increments, which means that distance is not being increased properly in the official tables. For example, with an increment of 1 in the value of z, from 1 to 2, the corresponding increment in distance is almost 3 billion light years. But afterwards, with the increments in z from 4 to 5 to 6 to 7, not a single billion light years is added to the distance scale. Clearly this is the consequence of including the value of ‘c’ in the redshift formula. At redshift 10, galaxies are ‘moving away’ at a speed close to ‘c’. When the receding speed (if really receding) of a galaxy approaches still closer to ‘c’, the galaxy will no longer be visible. The formula intends to restrict visibility to the range of sub-luminal receding speeds, but another factor is at play. The sort of cosmic horizon beyond which HST cannot see is not actually determined by the receding speeds of galaxies, because galaxies are not in fact receding away like that. Actually there is a region beyond (official) 13.2 billion light years where galaxies are so considerably redshifted, into the near-infrared zone, that HST cannot see them. The James Webb Space Telescope is able to see infrared objects, but that too has a limit. With these hard compulsions, which come mainly from the calculation methodology, NASA conveniently asserts that beyond 13.6 billion light years there will be complete darkness, and that the darkness will be due to the absolute absence of galaxies. Galaxies did not exist prior to 13.6 billion light years, and the Big Bang Theory is thus directly confirmed through a powerful telescope, even before the launch of the telescope.
With this very expensive and prestigious space telescope project, which is not even going to have a long functional life, the maximum they are going to find, or deliberately want to show, is that galaxies do not exist beyond the distance of 13.6 billion light years; that the age of the universe, 13.8 billion years, is correct; and that the Big Bang Theory is therefore ‘confirmed’ at the ‘observational level’. Furthermore, we have already seen in the first chapter that these formulas serve as colored spectacles, with the result that if a real or even hypothetical galaxy is located beyond 14 billion light years, the formula will tell us that it is not located at a distance of more than 13.6 billion light years. So the need here is to look at reality with clear, objective eyes and a vision not contaminated by the colored spectacles.
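The capping behavior described above can be sketched numerically. Here I assume the relativistic Doppler conversion v/c = ((1+z)² − 1)/((1+z)² + 1) together with a linear Hubble-law distance D = v/H₀ and H₀ = 70 km/s/Mpc; these are my illustrative assumptions (the official tables may use full ΛCDM formulas instead), but they reproduce the pattern described: nearly 3 billion ly gained between z = 1 and z = 2, yet less than a billion between z = 4 and z = 7, with everything saturating below c/H₀.

```python
def distance_gly(z, hubble_distance_gly):
    """Hubble-law distance with a relativistic Doppler redshift-to-velocity
    conversion: the distance can never exceed c/H0."""
    beta = ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)   # v/c
    return beta * hubble_distance_gly

H0 = 70.0                                              # km/s/Mpc (assumed)
hubble_distance_gly = 299_792.458 / H0 * 3.2616 / 1000  # c/H0 ~ 13.97 Gly

for z in (1, 2, 4, 5, 6, 7, 10):
    print(f"z={z:>2}: {distance_gly(z, hubble_distance_gly):6.2f} billion ly")
```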

Ms. Tamara M. Davis is an official voice who tells these things slightly differently. In a paper titled “Superluminal Recession Velocities”[xviii], she and her co-author write that the official redshift formulas are often taken within the context of the Special Theory of Relativity (SR), which requires that the visibility of galaxies should stop when ‘v’ becomes equal to ‘c’; that is, when the receding velocity equals the velocity of light, the galaxy permanently goes out of sight.

Thus, galaxies with distances greater than D = c/H are receding from us with velocities greater than the speed of light and superluminal recession is a fundamental part of the general relativistic description of the expanding universe. This apparent contradiction of special relativity (SR) is often mistakenly remedied by converting redshift to velocity using SR.

As physicists who prefer General Relativity (GR) over SR and who are straightforward in their assertions, the authors of this paper reveal the secret that galaxies with a redshift value of more than 3 are actually receding at superluminal speeds.

Here we show that galaxies with recession velocities faster than the speed of light are observable and that in all viable cosmological models, galaxies above a redshift of three are receding superluminally.

The paper then proceeds to explain the mechanism by which galaxies ‘recede away at superluminal velocities’ yet still remain visible, in terms of the ‘curved spacetime’ model of GR.
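The quoted z > 3 claim can be checked numerically within the authors' own GR framework. The sketch below assumes a flat ΛCDM model with Ωm = 0.3, ΩΛ = 0.7 and H₀ = 70 km/s/Mpc (my assumed parameters, not values from the paper): the GR recession velocity is v = H₀ × D_comoving, and at z = 3 the comoving distance already exceeds c/H₀.

```python
# GR recession velocity check in flat LambdaCDM (assumed parameters).
H0 = 70.0                # km/s/Mpc
c = 299_792.458          # km/s
om, ol = 0.3, 0.7        # Omega_matter, Omega_Lambda

def comoving_distance_mpc(z, steps=10_000):
    """Midpoint integration of D_C = (c/H0) * integral dz' / E(z')."""
    total, dz = 0.0, z / steps
    for i in range(steps):
        zmid = (i + 0.5) * dz
        total += dz / ((om * (1 + zmid) ** 3 + ol) ** 0.5)
    return (c / H0) * total

v_over_c = H0 * comoving_distance_mpc(3) / c
print(f"v_rec/c at z=3: {v_over_c:.2f}")   # > 1, i.e. superluminal
```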

Now we come back to our book, where the point is neither SR nor GR. Cosmic redshifts do not in fact mean ‘velocity’; galaxies are not moving away at all. The actual fact is that galaxies with a redshift of more than 3 are located beyond the official time of the Big Bang. NASA's confidence that even a 100x more powerful telescope will not be able to see anything beyond the distance of 13.6 billion light years indicates that NASA is fully aware that the limit is imposed by the formula itself. That is, even if they find many galaxies located at very far-off actual distances, they will conveniently say the distance is not more than 13.6 billion light years, showing the ‘calculated’ distance as proof.

Mr. Marco Pereira[xix], MSc (Nuclear Physics), PhD (Physical Chemistry) and a professor of Molecular Biophysics, has also noted the anomaly of non-linear redshift-distance ‘observations’ in Sloan Digital Sky Survey (SDSS) data[xx]. He claims to have found a fourth spatial dimension of the Universe through his self-created theory of the ‘Hypergeometrical Universe’[xxi]. He claims that his theory rightly predicted the non-linear redshift-distance pattern of Type Ia supernovae that is actually observed by the SDSS.

Image added with permission of Mr. Marco Pereira

This type of non-linear observed redshift-distance relationship is claimed to be rightly predicted by his theory of the Hypergeometrical Universe, and the same is said to be proof of the existence of an extra spatial dimension of the Universe. We have already noted that the actual non-linear redshift-distance relationship is something mainstream physicists avoid mentioning, and only a few people like Ms. Tamara M. Davis would dare to expose this kind of secret. Our finding is that Mr. Marco Pereira has not found some reality that was not already calculated by Special Relativity, but he does reach a position that is normally not stated openly by mainstream physicists. After sensing this anomaly, Ms. Tamara M. Davis rejected SR-based calculations and favored a GR-based explanation. After finding that the same anomalous-looking SDSS observations are consistent with his theory of the Hypergeometrical Universe, which accommodates SR formulas in its development, Mr. Marco Pereira declares that the SDSS observations are proof of the existence of an extra, fourth dimension of the Universe. The underlying fact is that the SDSS has also calculated the distances of Type Ia supernovae using the formulas of Special Relativity. The Lorentz transformation factor is the reason behind the non-linear plotting of this data. According to the simple Hubble Law, the plotting should have been linear. If it actually is linear, which becomes possible if we remove the Lorentz factor from the redshift formulas, then the distances of the farthest visible galaxies come out on the scale of many hundreds of billions of light years, consistent with direct, simple geometric calculations of distances. The ground for removing the Lorentz factor from the redshift formulas is the fact that redshifts do not actually represent velocities. But if redshifts do represent velocities (the position which is official, though not likely), then the non-linear plotting of real redshift-distance data is actually an anomaly that can rightfully be accounted for by proposing a fourth dimension of the Universe. If the Universe is actually expanding, and farther galaxies are more redshifted at distances lesser than the expected linear distance, then the extra redshift might have been accumulated during the passage of those galaxies through the “fourth dimension” proposed by Mr. Marco Pereira. The result is that either the farthest visible galaxies are located on a distance scale of many hundred billion light years, or Mr. Marco Pereira may be right in his proposal of an extra, fourth dimension of the Universe.
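The ‘remove the Lorentz factor’ alternative described above can be sketched in a few lines: with a purely linear Hubble law D = z × (c/H₀) and an assumed H₀ = 70 km/s/Mpc, inferred distances grow without bound instead of saturating near 14 billion light years.

```python
# Linear Hubble law with the Lorentz factor removed: D = z * (c/H0).
hubble_distance_gly = 13.97        # c/H0 for H0 = 70 km/s/Mpc (assumed)

linear = {z: z * hubble_distance_gly for z in (1, 3, 10, 20)}
for z, d in linear.items():
    print(f"z={z:>2}: linear distance = {d:6.1f} billion ly")
# z = 10 already gives ~140 billion ly, and higher redshifts reach the
# "many hundreds of billions" scale discussed in this section.
```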

But reality is not as complex as suggested by Mr. Marco Pereira, who only apparently favors simplicity by presenting his complex theories as conforming to Occam's razor[xxii]. The proof of the assuredly far greater distances of astronomical objects presented in this section is as straightforward as it can get, and thus conforms to Occam's razor in the true sense. Readers are requested to recalculate these distances themselves, and also to recheck the results with the official trigonometric formulas, whose results, with the lower-side estimates of the diameters involved, are only slightly different, as given in the table below.

As stated already, the Expansionist regime is not entirely ignorant of these anomalies. They know these things, and they hide the actual facts by presenting them within the twisted terminology and formulas of their favored framework. The Wikipedia article titled “Angular diameter distance”[xxiii] contains the following important, though twisted, confession on this topic:

However, in the ΛCDM model (the currently favored cosmology), the relation is more complicated. In this model, objects at redshifts greater than about 1.5 appear larger on the sky with increasing redshift.

This is related to the angular diameter distance, which is the distance an object is calculated to be at from ɵ and x, assuming the Universe is Euclidean.
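The ΛCDM behaviour quoted above can be reproduced numerically. The sketch below assumes a flat ΛCDM model with Ωm = 0.3 and ΩΛ = 0.7 (my illustrative parameters): the angular diameter distance D_A = D_comoving / (1 + z) rises, peaks near z ≈ 1.5-1.7, and then falls, which is exactly why, in that model, objects at z beyond roughly 1.5 ‘appear larger on the sky with increasing redshift’.

```python
# Angular diameter distance in flat LambdaCDM (assumed parameters).
om, ol = 0.3, 0.7   # Omega_matter, Omega_Lambda

def dimensionless_comoving(z, steps=2000):
    """Midpoint integration of integral dz' / E(z'), in units of c/H0."""
    total, dz = 0.0, z / steps
    for i in range(steps):
        zmid = (i + 0.5) * dz
        total += dz / ((om * (1 + zmid) ** 3 + ol) ** 0.5)
    return total

def d_angular(z):
    """Angular diameter distance D_A = D_C / (1+z), in units of c/H0."""
    return dimensionless_comoving(z) / (1 + z)

zs = [0.25 * k for k in range(1, 41)]    # z = 0.25 ... 10
peak_z = max(zs, key=d_angular)
print(f"D_A peaks near z = {peak_z}")
```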

We have already seen that in the official tables, as redshift increases, the increments in the distance scale become shorter and shorter. The formula tells us that an astronomical object is located nearer than its actual distance, and thus the object ‘appears’ (within the standard model) larger on the ‘sky’. But the appearance of anything on the sky does not depend on any model. If something looks larger on the sky within the Lambda CDM model, it only means that the calculated size of the object is larger than what can actually be observed on the sky. We have also seen earlier in this section just how Andromeda ‘appears’ larger on the sky. This confession, though made in twisted words, automatically validates, in principle, the calculations of the tremendously larger distances of visible galaxies presented in this section. Therefore the only issue remaining unsettled so far is whether the Universe really is Euclidean or not. The dilemma of official cosmology is that they have now reached the finding that at least the observed universe is flat, and thus the actual geometry of the observable universe is Euclidean. In a flat universe, representable using Euclidean geometry, two parallel lines will always remain parallel, no matter how great a distance is covered. To the question “Is the universe really flat, or is it just very slightly curved?”, Mr. Erik Anson, a Physics/Cosmology PhD student (University of Washington), provided the following insightful reply[xxiv]:

Yes, it’s entirely possible that the Universe is only almost flat on large scales, as is acknowledged by the (scientific)[xxv] community. There is a cosmological parameter, Ωk, that relates to the amount of large-scale curvature, and observations can constrain it to be within a small range including zero, but can never show it to be exactly zero.

However, if there is any curvature, it’s so small that it’s effectively irrelevant, so we may as well model it as flat (which is simpler) unless and until we know otherwise.

Symmetry magazine[xxvi], a joint publication of Fermi National Accelerator Laboratory and SLAC National Accelerator Laboratory, published an article titled “Our Flat Universe – Not a curve in sight, as far as the eye can see”[xxvii] on 07-04-2015. The following introductory lines say it all: the observable universe has actually been found to be flat, and is thus representable in Euclidean geometry:

Mathematicians, scientists, philosophers and curious minds alike have guessed at the shape of our universe. There are three main options to choose from, in case you’d like to do some digging of your own:

The universe could be positively curved, like a sphere.

The universe could be negatively curved, like a saddle.

The universe could be flat, like a sheet of paper.

As far as scientists can tell, this third option is correct. But what do people really mean when they talk about “flatness”? Your high school math teacher would be overjoyed to tell you that it’s all about geometry.

In a flat universe, Euclidean geometry applies at the very largest scales. This means parallel lines will never meet, and the internal angles of a triangle always add up to exactly 180 degrees—just like you’re used to.

In the Lambda-CDM model, as we have seen, the distances of far-off galaxies are not the actual physical distances; they are superimposed and artificially constrained by the twisted formulas. In simple geometric calculations of distances there is no artificial or twisted superimposition at work, and thus the actual distances of visible galaxies really lie on a much larger distance scale than the standard model could permit, which means that the actual physical reality is not truly 'modelled' by this 'standard model'. With such greater distances of astronomical objects, the problem is not that there should be more matter than is observable; the implication is that the density of matter within the universe is far lower than the available assessments of the so-called standard model, which falsely claims to have explained all observed reality while not even knowing the correct density of the present-day Universe, yet still claims to know the details of minute fractional parts of the assumed very first moment after the 'Big Bang'.

Secondly, it is the 'velocity' interpretation of redshifts that creates the whole need of using 'c' in the redshift formulas. It is the value of the velocity of light 'c' that compels science authorities to stay blind with wrong, low-side estimates of the distances of remote galaxies. The velocity interpretation of redshifts inescapably leads to flawed calculations of the distances of those galaxies, which is a sort of mathematical proof that redshifts do not represent receding velocities of galaxies: with the velocity interpretation, 'c' enters the redshift formulas, and consequently the estimates of astronomical distances are bound to be outright deceitful, since no galaxy will ever be shown located beyond a certain distance.
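The capped-distance claim can be illustrated numerically. The sketch below is my own illustration (not from any official source); H0 = 70 km/s/Mpc is an assumed round value. Under the relativistic Doppler reading of redshift, the recession velocity stays below c, so the inferred Hubble-law distance v/H0 can never exceed c/H0, however large the redshift grows:

```python
# Illustrative sketch only: under a velocity reading of redshift, the
# relativistic Doppler formula keeps v below c, so the inferred
# Hubble-law distance d = v / H0 is capped at c / H0.
# H0 = 70 km/s/Mpc is an assumed value.

H0 = 70.0                # Hubble constant, km/s per Mpc (assumed)
C = 299_792.458          # speed of light, km/s
GLY_PER_MPC = 3.2616e-3  # billions of light years per megaparsec (approx.)

def doppler_velocity(z):
    """Recession velocity (km/s) under the relativistic Doppler reading."""
    f = (1.0 + z) ** 2
    return C * (f - 1.0) / (f + 1.0)

def hubble_distance_gly(z):
    """Distance (billions of light years) from d = v / H0."""
    return doppler_velocity(z) / H0 * GLY_PER_MPC

for z in (0.1, 1.0, 5.0, 100.0, 1e6):
    print(f"z = {z:>9}: {hubble_distance_gly(z):6.2f} Gly")

# Whatever the redshift, the distance stays below c/H0, about 13.97 Gly.
```

The distance creeps toward, but never passes, c/H0: exactly the "certain distance" barrier described above.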
Although better estimates of astronomical distances have been presented in this section, this book will keep referring to the distances of remote galaxies by their 'standard' values or estimates unless otherwise specified.

[i] , and diameters of Sun and Moon taken from simple google search.

[ii] The Britannica article on the Coma Cluster says that the diameter is 25 million light years ["The main body of the Coma cluster has a diameter of about 25 million light-years"]. The Wikipedia article seems silent on the diameter, but a Google search, citing Wikipedia, shows the diameter of the Coma Cluster as 20 million light years.

[iii] “Angular Diameter Distance” section on this Wikipedia page:

[iv] “Distance Estimates” section of Andromeda article on Wikipedia —




[viii] Based on comparison of a section of sky from the deep-field image with the size of the Moon, on this and many other pages:






[xiv] Right now, the space telescope is expected to be launched in the year 2021.



[xvii] I had a brief conversation (over the Internet) with a NASA Information Officer. I told him that official galactic distances are underestimated. I did not explain or prove my point; I just told him that the proof would come with a book (i.e. this book). He denied the existence of any such anomaly and also denied that NASA is trying to hide it. To this I replied that even then I would write in my book that NASA fully knows of and only hides this fact, because otherwise it would be more embarrassing for NASA. Readers should judge for themselves whether or not NASA knows about it, given the simple fact that their formulas do impose an upper limit on the distance of a galaxy.


[xix] profile of Mr. Marco Pereira –






[xxv] The bracketed word '(scientific)' was added by me.



Einstein tried to explain his idea of the Universe in the year 1917.

His universe was flawed: a static and finite universe. Stars at cosmic distances were relatively static; there was no movement of anything at cosmic scales.

A finite cosmos with no movement induced in him the fear that the universe would collapse under gravity. Therefore he introduced the cosmological constant into his equations so that the universe would not collapse.

His universe had constant curvature, an idea which has now been discarded through modern observations. Einstein also miserably failed to acknowledge the existence of galaxies, and thus he also failed to know that stars (in galaxies) were in movement faster than what his theory could 'predict'.

Like many other outdated, primitive ideas, Einstein too had one such theory of the Universe.

This post is in response to a point of view that the starting point of Western Philosophy is Socrates, because his era was the start of reason, and that the study of Philosophy should therefore begin from this era.

My response is that Philosophy makes no sense if we do not start with the earlier period that was characterized by superstition and mythology. The age of reason came as a response, or reform, to those earlier unreasonable practices. Following is a summary of the different stages up to Aristotle:



Dionysus … the god of wine.

Orpheus … the proponent of Dionysism …

According to him, matter is the prison of the soul.

The soul must meet the ultimate reality (which might be the same god Dionysus).

The method is drinking wine and inducing the spirit of Dionysus in oneself.

So purpose achieved…!!!



Pythagoras … a REFORMER of Dionysism.

Purpose is same.


Now method is NOT drinking vine.

Now method is using Intellect and intellectual thought.

That is BIRTH of REASON.




A follower of Pythagoras and devotee of reason.

matter was previously the prison of soul.

Now again … matter is a bad guy. ILLUSION.

According to REASON, reality is not in observable matter.

Only reason can find the UNCHANGEABLE reality.



Reaction to Parmenides using same reason.

Atomism. Reality of matter affirmed.



Reason becomes a standard method of inquiry.

Sophists and Sophistry.



The method of Sophistry is not clear. In order to refer to things or phenomena, we must use definitions and concepts.

An important concept added to reason by SOCRATES.



PLATO … reality is in reason. The point of Parmenides.

Parmenides said whole reality is UNCHANGEABLE and matter is ILLUSION.

PLATO said every individual thing of experience has UNCHANGING IDEAL ‘UNIVERSAL’ counterpart.

Those ideal universals are real. Observed things are shadows (a form of illusion).




…. Method of reason must be refined. There must be rules of accuracy. Theory of Logic.

What I want to say … reason itself was not discovered at this stage. Rules of accuracy of reason were discovered at this important stage.

The question is what is the purpose of reading Philosophy?

Is the purpose to understand Philosophy?

Or to understand reality?

If understanding of philosophy is pursued, then maybe studying 2400 years of philosophers from the time of Socrates should be the approach.

If understanding of reality is the task … then we can even start from Frazer, who will take us to the most ancient times, where humans started using the brain in whatever form. Of course, that path will lead only towards the reality of consciousness and mind.

The reality of matter must be judged from personal experiences and reflections and, of course, by taking help from reading previous or present relevant Philosophers and scientists.

The question was can you define Immanuel Kant in one sentence?

Immanuel Kant

So following was my one sentence definition of Kant:

One who accumulated a jumble of shallow details to legitimize metaphysics by attributing it to intelligible knowledge; not by treating it, but by calling it 'a priori' knowledge and by saying that the sensible world is created by that 'a priori' knowledge.

Big Bang Theory is invalid in the capacity of a "scientific theory".

Modern cosmologists now try to get rid of the term "big bang" by saying that it was a derogatory term coined by Fred Hoyle, etc.

They do affirm at least two things and insist that these two things are scientifically valid. Following are the two points:

  1. That the Universe is expanding; and
  2. That the early Universe was hot and dense.

'Big Bang' is not a recognized theory as such. But the above two points constitute a recognized theory which, in detailed mathematical form, is often termed the "Lambda-CDM Model".

Both of these points are actually without any scientific proof. Expansion is said to be derived from observed redshifts. The point is that the observed redshift is not Doppler's Effect; it has a different name, i.e. Cosmological Redshift.

Unlike Doppler's Effect, Cosmological Redshift does not itself constitute proof that anything is receding from us. In the absence of a clear Doppler Redshift, there is need of direct evidence that galaxies are in fact moving away. But there is no direct evidence. For example, they say that galaxies are moving away and that, at greater distance, the same galaxies will become more redshifted.

Now direct evidence could come in the form that we note an increase in redshift over time. We have 100-year-old redshift data for many galaxies, and no increase in redshift over that period of 100 years has been recorded. Let's say the farthest visible galaxy is moving away at the speed of light and its redshift value now indicates a distance of 13.3 billion light years. Only after one million years would we be able to note a slight increase in redshift, indicating a distance of 13.301 billion light years.
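The arithmetic of this waiting-time argument can be checked directly (my sketch, using the text's illustrative numbers; a galaxy receding at the speed of light covers one light year per year):

```python
# Quick arithmetic check of the waiting-time argument above.

d_now_gly = 13.3            # distance indicated now, in billions of ly
baseline_years = 1_000_000  # proposed observation baseline

# Receding at c: one light year per year, so +0.001 billion ly.
d_later_gly = d_now_gly + baseline_years / 1e9
print(round(d_later_gly, 6))          # 13.301

# Over the ~100 years of existing redshift data the change is minuscule:
change_100yr = 100 / 1e9              # in billions of ly
print(change_100yr / d_now_gly)       # fractional change of order 1e-8
```

Even for the fastest conceivable recession, a century of data changes the indicated distance by only a few parts in a billion, which is why no increase has been recorded.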

The fact is that direct evidence of the receding of galaxies is not available now, and it can become available, if galaxies really are moving away, only after hundreds of thousands of years.

My point is that, till then, please call it a philosophical theory. When, after many hundreds of thousands of years, you eventually get the direct evidence, then I too will call it a scientific theory. But before that time, it is Philosophical or even Mythical.

They also say that the CMB is proof of expansion. I say the CMB is not proof of expansion. They have only 'interpreted' the CMB in terms of an expanding universe.

The CMB can also be 'interpreted' in terms of a non-expanding infinite universe. In that case, there should be daytime brightness in terms of Olbers' paradox. I say that the CMB is that daytime brightness (redshifted into the invisible zone of the spectrum) and thus it is proof of an infinite and static universe.

They do not even have any explanation of the CMB if the universe is not expanding.

They also say that the abundance of light elements in the far-off (early) universe is proof of expansion.

I say this too is not proof of expansion. Let's say the Universe was infinite and that, by the time of their so-called Big Bang, there was only a universal fog of Hydrogen (maybe including other light elements). In that case, the first galaxies started to emerge by way of gravitational condensation of the already existing fog at the standard accepted time, and thus again the abundance of light elements has been explained in a non-expanding universe. In fact, the abundance of light elements is not any proof of expansion. If correct, it is only proof of the fact that at an earlier time, everywhere throughout infinite vastness, there were only light elements.

In short, the fact is that so far there is exactly zero proof of this expanding universe theory. When, after many hundreds of thousands of years, they get the direct evidence, then I will be the first to accept that yes, the expanding universe theory is a scientific theory.

Thanks to Encyclopedias and other secondary sources, we may know what Georg Wilhelm Friedrich Hegel was saying. His original writings are almost unreadable.

My assessment is that Hegel does not expect anything from, or care anything for, the reader. He is himself an expert on Socrates, Plato, Aristotle, Descartes, Spinoza, Locke, Leibniz, Hume, Kant and others, and he is talking to himself. He is not even trying to explain his points to readers. His context is his own mind, and only he knows where he is talking from. It seems as if he were writing diaries for his own record alone and then published those diaries.

Of course it may be possible to reach his context by first becoming an expert on all those Philosophers stated above. It may be equally correct to say that he is writing only for top experts of the field or for advanced students, but he himself does not state this ambition. Based on my attempt at reading him in translation of the original writings, coupled with what secondary sources inform us about him, my understanding is that he is an idealist and talks in terms of universals. His dynamics are dynamics of ideas alone. Progress in ideas has nothing to do with observation of phenomena, though the title of one of his books is "Phenomenology of Spirit". Progress in ideas takes place only due to internal conflicts of ideas. Sometimes it seems that he also favors an empirical approach and talks about real things that exist around us, but a closer look may reveal that he talks about them in the sense of "universals".

Hegel's Dialectic is a closed Rationalist Idealism. There is already an idea, a 'thesis' (thesis/antithesis are perhaps interpretations found only in secondary sources, but they do offer a convenient approach for describing what Hegel was actually saying). Progress in ideas will come from within this thesis, as the same thesis will give rise to its antithesis. This is a closed system because there is no role for fresh outside information in the process of upgrading ideas. There is no role for phenomena either. This is Idealism because ideas alone give rise to further ideas and there is no role for the material world. It is Rationalism because only an internal logic of ideas, i.e. a meta-logic, determines the direction of ideas. This is extreme Rationalism, like the World of Ideas or the universalization of Plato.

There are a lot of fishy things going on in Modern Physics. Textbooks on Physics, as well as all the official sources of Physics, inform us that the second law of Newton is F=ma (or, in modern form, F=dp/dt).

Anything questioning this stance is straightaway regarded as crackpottery. But I dared to question this. I have had intense debates with experts on this topic many times.

Here I choose not to go into the details; the topic is lengthy and I should write a book on this subject. Here I am only telling that recently I had a debate with a PhD Physics person, in which I sufficiently showed him that in fact Newton did not say F=ma and that what he actually said can be described as F=mv.

That PhD Physics person then had to say the following:

The fact is, Newton was not quite as careful and precise with words and definitions in 1687 as modern science and mathematics (and yes, textbooks) demand.

09-07-2019 – By a person “PhD in Theoretical Physics”.

The brief background is that I confronted him that Newton did not say F=ma; instead he said F=mv.

Now he tried hard to prove that Newton in fact said F=ma.

But I sufficiently proved my stance that in fact Newton was saying F=mv instead of F=ma.

At this point, not only he but the experts in general tend to unduly favor the textbook stance. They usually come to the point of: so what if Newton carelessly stated his law in a way that cannot be mathematically described as F=ma; the textbooks reached the better truth of F=ma, which has passed 'all the tests'.

My demand from them is: then please stop calling the second law of motion 'Newton's Second Law of Motion'.

If the task is to present correct information in textbooks, then please inform the students that Newton originally presented F=mv, but the textbooks reached the better position of F=ma.

Above is their accepted truth that they do not openly present.

What they do not accept so far is that Newton was right in saying that F=mv and textbooks are wrong in the formulation of F=ma.

Yes … in my opinion … Newton was right. F=ma is a wrong formulation.
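To make the contrast between the two formulations concrete, here is a toy numerical comparison (my own illustration; it endorses neither reading). A constant force acts on a unit mass starting from rest:

```python
# Toy contrast between the two readings debated above (illustration only).
F, m = 2.0, 1.0
dt, steps = 0.01, 1000          # simulate 10 seconds

# Reading 1: F = m*a (textbook form) -- the force changes the velocity,
# so the motion accelerates.
x1, v1 = 0.0, 0.0
for _ in range(steps):
    v1 += (F / m) * dt          # a = F/m updates the velocity
    x1 += v1 * dt

# Reading 2: F = m*v (the reading the text attributes to Newton) --
# the force fixes the velocity, so the motion is uniform.
v2 = F / m
x2 = v2 * (dt * steps)

print(v1, x1)   # speed kept increasing (about 20 and 100)
print(v2, x2)   # constant speed throughout (2 and 20)
```

Under F=ma a constant force produces ever-growing speed; under an F=mv reading it produces uniform motion. That is exactly the kind of observable difference the debates above turn on.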

My demand from the Science authorities is this: please rename this law as "Euler's Law of F=ma".

I share the following quote from the Stanford Encyclopedia's entry on Newton. The quote says that the F=ma formulation is not traceable within the Principia. It also names the person (Euler) who made the F=ma formulation part of the academic culture. The quote is thus saying that textbook "Newtonian Physics" is actually "Euler's Physics".

Therefore my demand is … Instead of calling it (second law) Newton’s Law … Please call it Euler’s Law.

Isaac Newton (Stanford Encyclopedia of Philosophy)

Euler was the central figure in turning the three laws of motion put forward by Newton in the Principia into Newtonian mechanics. These three laws, as Newton formulated them, apply to “point-masses,” a term Euler had put forward in his Mechanica of 1736. Most of the effort of eighteenth century mechanics was devoted to solving problems of the motion of rigid bodies, elastic strings and bodies, and fluids, all of which require principles beyond Newton’s three laws. From the 1740s on this led to alternative approaches to formulating a general mechanics, employing such different principles as the conservation of vis viva, the principle of least action, and d’Alembert’s principle. The “Newtonian” formulation of a general mechanics sprang from Euler’s proposal in 1750 that Newton’s second law, in an F=ma formulation that appears nowhere in the Principia, could be applied locally within bodies and fluids to yield differential equations for the motions of bodies, elastic and rigid, and fluids. During the 1750s Euler developed his equations for the motion of fluids, and in the 1760s, his equations of rigid-body motion. What we call Newtonian mechanics was accordingly something for which Euler was more responsible than Newton.

Stanford Encyclopedia is acknowledging that F=ma formulation appears nowhere in Principia.

Anyway, when experts do say that Newton was careless, that he should have said F=ma but failed to, and they say this only after finding that there is no way to escape, then my genuine demand is: please rename this law as Euler's Law and stop calling it Newton's law.

If they do not rename this law, and keep calling Newton careless even when they themselves fail to defend the stance that Newton had anything to do with the F=ma thing, then their act of calling Newton careless is a kind of 'cat out of the bag' situation.

Relativity Theory has survived more than 100 years and Big Bang Theory will also somehow complete its century. How powerful these theories are! We are told that scientists always struggle to devise (costly) methods to prove them wrong but every time these theories pass the test with flying colors.

Here I only say that they did not devise the multi-million-dollar LIGO project only to disprove Einstein. There is no "Einstein was wrong" business in the market; it is the "Einstein was correct" business which exists in the market. Here I present two examples of my own interactions with representatives of science authorities that show that they have actually imposed impassable barriers to ensure that their beloved theories cannot be proven wrong by anyone.

The first example is my interaction with a national-level scientist (or representative of science) of my country in the year 2007, when he told me that editors of renowned science journals do not even bother to read any submission critical of Relativity Theory. Those submissions go straight to the dustbin without anyone even reading them:

Special Relativity is a closed case. We teach it to high school kids these days. The editor of the American Journal of Physics told me 30 years ago that he receives 10 attempts a month to question SR and dumps one as soon as he sees it.

Feb-04, 2007.

Yes, the above discussion related only to Special Relativity, but we can guess that they might have done the same for any criticism of GR as well. So far, however, there is no hint of the "impassable barrier" which is the topic of this blog post. This hint mainly comes from personal experience, when you try to criticize these official theories. However, my following interaction with a NASA Information Officer is capable of conveying this hint to general readers as well.

Some time back I had a conversation with a NASA Information Officer.

NASA Information Officer:

Nobody stops you from throwing GR and QM out if the window. But if you do so you have to cone up with an explanation fir ALL things these twk theories explain perfectly.

(This NASA officer was occasionally writing incorrect spellings. But this is not the point.)

My response:

Obviously all things cannot be thrown out with a single published paper. Now logic becomes.
Since a single paper cannot throw out all the things so that single paper will be rejected. Therefore second paper will also not be published.
And thus … actually no criticism of basic frameworks can ever be published through this peer review system

NASA Information Officer:

The big bang theory made many predictions which were all observed and confirmed. If you provide a new theory it is therefore necessary to explain all these observations. You can write a paper on a static universe.
You only have to explain redshift, cmb, finite age of stars, galactic evolution, absent of black dwarfs, elemental abundances, large scale structures,… and yes, in one paper.

My response  was:

This is unfair. Standard model has not developed all these concepts through a single paper. Secondly all these things are ‘confirmed’ only through particular interpretations.
All these things can be interpreted in better way. But it is not fair to demand all the reinterpretations from a single paper. ……..

Thus they apparently tell us that they devise expensive experimental projects to critically test these theories, but actually they only intend to prove these theories correct. They apparently tell us that a single falsification would consign these theories to the dustbin, but actually all attempts at falsification go straight to the dustbin without anyone actually reading them. They have constructed (in their minds) a sort of impassable barrier that any critique must pass. And I confidently assert here that even if anyone did pass it, they would come up with additional excuses: this or that thing is not covered, etc.

And … do I intend to pass this impassable barrier?

Well, I do not intend to write a single paper or book which will falsify all aspects of their beloved theories. Their theories do contain partial truth, due to which they apparently pass all those tests. But "the Universe is expanding" is not truth. "Dark Matter" and "Dark Energy" also do not exist. Gravity is not curvature of spacetime. The actual Newton is different from the textbook Newton, such that the actual Newton is more accurate. I am not going to explain all these things in a single paper or book, but I will explain all of them here and there, somewhere. I do not intend to pass any impassable barrier. Let them try to throw me in the dustbin without even reading me, and let me try to survive those dustbins.

We live in a physical world that behaves mathematically. We do not live in a mathematical world that manifests itself physically.

Physical reality comes first. Equations do not govern physical reality; physical reality can be approximated into the form of equations. Physical reality can also be approximated into the form of logical propositions, like Newton's first and third laws of motion. Newton described even the second law in the form of logical propositions, without mathematical formulation.

We start from Wikipedia’s explanation of ‘Metric Expansion of Space’:

“The metric expansion of space is the increase of the distance between two distant parts of the universe with time. It is an intrinsic expansion whereby the scale of space itself changes. It means that the early universe did not expand “into” anything and does not require space to exist “outside” the universe – instead space itself changed, carrying the early universe with it as it grew. This is a completely different kind of expansion than the expansions and explosions seen in daily life. It also seems to be a property of the entire universe as a whole rather than a phenomenon that applies just to one part of the universe or can be observed from “outside” it. Metric expansion is a key feature of Big Bang cosmology, is modeled mathematically with the Friedmann-Lemaître-Robertson-Walker metric (FLRW Metric) and is a generic property of the universe we inhabit. However, the model is valid only on large scales (roughly the scale of galaxy clusters and above), because gravitational attraction binds matter together strongly enough that metric expansion cannot be observed at this time, on a smaller scale.”

In short, according to official sources, the 'Expansion of Space' stuff is rooted in the FLRW metric. It is said that before the discovery of the 'redshift-distance' relationship in light coming from far-off galaxies by Edwin Hubble in 1929, (F) Friedmann (1922) and (L) Lemaître (1927) had already described the 'Expansion of Space' in their respective works.

Before presenting the actual points of (F) Friedmann (1922) and (L) Lemaître (1927), let me first share the point of view of a well-known Internet physics writer, Mr. Victor T. Toth, on this topic. Following was his reply, dated December 01, 2017, to a question:

Big Bang theorists do not claim that space is “created physically”, whatever that means.

Big Bang theorists do claim that things, on average, recede from each other; that the distance between things is therefore increasing, on average; and that correspondingly, the metric of spacetime evolves as governed by Einstein’s field equations.

None of this implies space being created, “physically” or otherwise. (For starters, space is not a measurable, tangible concept, nor is it a conserved physical quantity. When you measure “space”, what you actually measure is the distance between things, not space itself, which is intangible.)

In the above answer, Mr. Victor T. Toth is saying that Big Bang theorists do not claim that space is expanding or being created or anything like that. But in another, more recent answer, he accepts that Big Bang theorists do say exactly that, and thus he shows his disagreement with those theorists:

Not for the first time, allow me to be the contrarian here and challenge my esteemed colleagues who are telling you that space is expanding, by making three (to me) rather important points: (i) What is this “space” that is expanding? How do you measure it? Where are its little markers to which you can attach your measuring tape? And exactly how is this “space” represented in the Friedmann equations? (ii) Speaking of which, if it was space expanding, how come I can derive (see, e.g., books by Weinberg or Mukhanov) the aforementioned Friedmann equations purely in the context of Newtonian physics, with its concept of absolute space and time? (iii) Last but not least, when gravity brings expansion to a halt, how does it do that? Is it somehow acting on “space”, as opposed to acting on matter? (See also Peacock’s Cosmological Physics.)

No, space is not expanding. It’s not even something we could measure if it did. The Friedmann equations contain two entities: matter (represented by its density and pressure) and the gravitational field (represented by one component of the very special, homogeneous and isotropic FLRW metric.)

Galaxies are moving further apart. If you could stretch a measuring tape from the Milky Way to a distant galaxy, the distant galaxy would be zipping alongside that measuring tape at quite a clip (probably several hundred kilometers a second, at the very least.) And when, in a region where matter is denser-than-average, gravity prevails, it stops those galaxies from moving away from one another.

The purpose of presenting the quotes of Mr. Victor T. Toth was to show that some big bang cosmologists are already against the idea of Expansion of Space. However, here Mr. Victor T. Toth is not representing the dominant opinion of mainstream big bang cosmologists, who overwhelmingly think that space is expanding and that this notion of Expansion of Space is rooted in the works of (F) Friedmann (1922) and (L) Lemaître (1927).

Therefore I will now show that neither (F) Friedmann (1922) nor (L) Lemaître (1927) actually said anything about Expansion of Space, and that this notion is deceptively attributed to them by the mainstream cosmologists. Mr. Victor T. Toth has already given the hint that the Friedmann equations contain two entities, (i) matter and (ii) the gravitational field, and thus there is nothing like Expansion of Space in the work of Friedmann (1922).

So let us first check Friedmann's actual concept of space. The English translated title of his 1922 paper is "On the Curvature of Space". He uses the term 'space' as synonymous with 'radius of the universe'. By the term 'radius of the universe' he means that the mass content of the universe would create a gravitational boundary of the universe, such that a straight-line universal journey of a physical object would trace a complete circle and reach back to its original point. The 'radius of the universe' is the radius of this universal 'straight' line which is actually circular. Within this meaning of 'space', it is physically valid to say that space may expand or contract, and within Friedmann's mathematical model, space really is expanding or contracting according to this meaning. Following are some examples from Friedmann's paper of the usage of the term radius R as curvature of space:

“Here R depends only on x4 and it is proportional to the radius of curvature of space, which may therefore change with time.”

While deriving the constant universe model of Einstein within his own general scheme, Friedmann writes: "whereby R signifies the constant (independent of x4) radius of curvature of space."

“If we restrict our consideration to positive radii of curvature”.

“Let the radius of curvature equal R0 for t = t0.”

“Positive or negative depending on whether the radius of curvature is increasing or decreasing for t = t0.”

“by choice of the time it can always be arranged such that the radius of curvature increases with increasing time at t = t0.”

It is now clear that, yes, space is contracting or expanding in Friedmann's model, but it is contracting or expanding within the above physically valid meanings of contraction or expansion of space. Big Bang cosmologists, however, tell us a wholly different and misleading thing and attribute their own faulty model to Friedmann. They call their own misleading model of 'expansion of space' the 'metric expansion of space' and wrongfully attribute this faulty physical model to Friedmann.
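Friedmann's usage can be made explicit with the line element of his closed model (standard textbook form, my rendering), in which the single function R(t) is at once the curvature radius and the 'radius of the universe':

```latex
ds^2 = c^2\,dt^2 - R(t)^2\left[\,d\chi^2 + \sin^2\!\chi\,\bigl(d\theta^2 + \sin^2\!\theta\,d\varphi^2\bigr)\right]
```

A 'straight-line' journey at fixed angles closes on itself after a proper length of 2πR(t): exactly the circular journey described above. R(t) growing or shrinking is then expansion or contraction in the physically valid sense just stated.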

Having checked the actual position of Friedmann (1922), we now come to the actual position of Lemaître (1927) with regard to the notion of Expansion of Space.

The modern concept of Expansion of Space has actually come from manipulating Equation No. 23 of Lemaître's (1927) paper, which can be written as:

V/C = (R'/R)r

The above form of Equation No. 23 superficially resembles the Hubble Law, which is V = HD.

In Equation No. 23, V/C is the 'redshift', and in the Hubble Law, V is the 'redshift'; thus the LHS of the two equations correspond.

Moreover, in Equation No. 23, r is distance, so the 'r' and 'D' on the RHS of the two equations also correspond.

Therefore, if we use the notation of the Hubble Law, we can write Equation No. 23 as follows:

V = (R'/R)D

Here R means the radius of the whole Universe (the radius of the 'whole' universe should itself have been regarded as a 'cranky idea' in the first place).

What Lemaître stated was of the form V=(R'/R)D; what goes as the standard 'interpretation' in every official source (books, papers, etc.) is V=(S'/S)D.

In short, Lemaître was saying in his Equation No. 23 (1927) that redshift (V) is caused by the increase of the radius of the whole universe, while the distance of the galaxy (D) remains constant. The actual Equation No. 23 is not exactly this one: it takes this form, and superficially resembles the Hubble Law, only when the notation of the Hubble Law is used. But unlike the Hubble Law, where H is constant, here it is the distance of the galaxy (D) that is constant.

Does R'/R mean H or not? Whether or not it means H, it is not constant like H.

This is the actual position of Lemaître.

What does the FLRW metric attribute to him?

The FLRW metric turns this into V=(S'/S)D, where S means 'Space'. This conversion of R into S is a simple manipulation.

Lemaître did not here speak of an increase of Space, or even of an increase of the distance of the galaxy; according to Equation No. 23, the distance of the galaxy remained the same. This has been 'interpreted' in the FLRW metric as the 'coordinate' of the galaxy remaining the same while space increases.

In the end, after all, this is a deceptive manipulation. V=(R'/R)D is NOT equal to V=(S'/S)D.
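For reference, the standard FLRW reading that is disputed here can be sketched numerically (toy values of my own, not observational data): the comoving coordinate of the galaxy is held fixed while a scale factor S(t) grows, and the recession rate then takes exactly the V = (S'/S)D form:

```python
# Sketch of the standard FLRW reading discussed (and disputed) above.
# All numbers are made-up toy values.

def S(t):                 # toy scale factor, growing linearly with time
    return 1.0 + 0.1 * t

def S_dot(t):             # its time derivative S'(t)
    return 0.1

chi = 5.0                 # fixed comoving coordinate of the galaxy
t = 3.0

D = S(t) * chi            # proper distance D(t) = S(t) * chi
V = S_dot(t) * chi        # its rate of change D'(t) = S'(t) * chi

# Hubble-like form: V = (S'/S) * D, with the coordinate chi unchanged.
assert abs(V - (S_dot(t) / S(t)) * D) < 1e-12
print(D, V)               # 6.5 0.5
```

In this reading the 'distance' D grows while the 'coordinate' chi stays put, which is precisely the interpretive move the text objects to.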

Thus we have seen and confirmed that neither Friedmann (1922) nor Lemaître (1927) coined the term or concept of Expansion of Space, and that this concept or notion is only deceptively attributed to both of them via the so-called FLRW metric.

The position is that without the notion of Expansion of Space the Standard Model of Cosmology (Lambda-CDM) does not work, while this notion itself is unreal, illogical, non-physical, as well as deceptive.

For further details, please see my book “A Philosophical Rejection of The Big Bang Theory”.