
30 Sept 2025

5 great thinkers who rejected their own ideas

Philosophers rarely change their minds. These thinkers did — often at social and professional cost.



Credit: Public Domain / Wikimedia Commons / Big Think


Here’s a curious fact that often slips by unnoticed: Philosophers rarely change their minds. Across the history of the discipline, you could count on two hands those who openly revised their positions on major questions.

This is surprising, given philosophy’s very nature. It thrives on debate, supplying endless reasons to question one’s stance. Philosopher Will Buckingham once remarked that, if reasoned dialogue worked as we might expect, philosophers would be shifting their views “with a quicksilver frequency that would put the rest of us to shame.” After all, in a world teeming with critical colleagues and formidable counterarguments, one might imagine minds constantly turning over. The ideal philosopher, we assume, would be supple, guided by evidence, and willing to follow arguments wherever they lead — embodying the Socratic lesson that recognizing one’s fallibility is the beginning of wisdom. As Alexandra Plakias has noted, philosophical about-faces should not scandalize us; they should be honored.

Hilary Putnam, one of the 20th century’s most influential philosophers, stood out as a rare figure who treated changing his mind as a virtue rather than a failing. He even defended it openly, presenting self-revision as a mark of intellectual honesty. In 1988, in the introduction to his major work Representation and Reality, he performed one of his most striking turnarounds: renouncing functionalism, the computational model of mind he had once advanced. Anticipating disapproval, he asked why colleagues treated this habit as a flaw. Perhaps, Putnam quipped, he erred more often — while “other philosophers don’t change their minds because they simply never make mistakes.” For him, obstinacy was far worse than correction.

So, why does intellectual stubbornness so often carry the day? Buckingham suggests it begins with the sheer effort that philosophers pour into their arguments. To see that carefully constructed edifice topple like a house of cards is painful enough — even worse when a lifetime’s reputation is bound up with it. Safer, then, to keep loosing arrows at others’ positions rather than one’s own. Yet the culture of philosophy adds to the pressure. Sticking faithfully to a theory is praised as strength, a badge of consistency. In an age enthralled by certainty and being right, reversal is branded weakness. Little wonder that a philosopher who changes course invites surprise, suspicion, and sometimes the hostility of old friends.

Still, the history of ideas contains flashes of intellectual heroism: the nerve to renounce even one’s most celebrated beliefs in deference to truth. Psychologist Peter C. Hill observes that while many dismiss such reversals as flip-flopping, the simple act of saying “I was wrong” can signal resilience, modesty, and openness. In a discipline built on unrelenting argument and daring questions, the ability to shift course in light of stronger reasoning is not weakness but a philosopher’s deepest obligation. Such episodes remind us that philosophy, at its best, thrives on receptivity and the humility to change direction.

Even in antiquity, some thinkers embodied this courage. Timocrates of Lampsacus broke with his mentor Epicurus, denouncing the very school he had once embraced. His brother Metrodorus struck back with a fiery polemic, Against Timocrates. Dionysius of Heraclea, caught in a health crisis, abandoned Stoicism for the pleasures of the Cyrenaics — only to be branded a traitor by former allies who saw his change of heart as proof of weakness.

From these early renegades, we now turn to five later cases of “radical reversal”: Augustine, Kant, Marx, Wittgenstein, and Simone Weil. These were no youthful missteps corrected with time, but towering thinkers at the height of their powers overturning their own hard-won systems. Each held fast to a worldview until a crisis or revelation broke it apart. Instead of clinging tighter, they leapt into new terrain — often at steep personal cost.

St. Augustine: From cosmic puppets to the grip of grace


Augustine of Hippo (354–430) altered his philosophical course twice — and each time with seismic force. Long before he was canonized as a saint, he was a restless, brilliant young mind absorbed in a very different creed. For nine years, he devoted himself to Manichaeism, a Persian religion now vanished from history. Its vision was stark: the Universe locked in perpetual war between Light and Darkness, good and evil. In this scheme, free will was feeble, and sin could be blamed on dark forces inhabiting the body. Augustine rose quickly, dazzling as a public advocate, trading arguments with Christians and skeptics alike. Yet his private life told another story: “in love with love,” he clung to pleasure, kept a concubine, fathered a son — and reasoned it all away as the work of Darkness.

In his late twenties, Augustine entered a season of turmoil. Doubts about Manichaean cosmology grew sharper, and he slowly drew back from the dualists. For three unsettled years, he searched for a philosophy that could steady his restless mind — testing skepticism, exploring Neoplatonism, and weighing Christian teachings, already entertaining the notion of free will. Outwardly, he taught rhetoric; inwardly, he wrestled. He had parted from his mistress, reluctantly accepted an arranged marriage, and spent months in semi-monastic retreat with friends.

Then came the turning point. Weeping helplessly beneath a fig tree, he heard a child’s voice chanting, “Take up and read.” Opening the New Testament at random, he was seized by a sudden clarity: a light of serenity that dissolved doubt and gave him strength to renounce his old life.

In the summer of 386, Augustine embraced Christianity with dramatic finality, abandoning his career as an imperial rhetoric teacher and a never-realized arranged marriage. Yet this was not the last reversal. The great struggle of his thought — the tension between free will and grace — would occupy him for the rest of his life. Early in his Christian journey, eager to refute Manichaean fatalism, he championed human freedom. In On Free Choice of the Will, he argued that evil arises from the misuse of choice, not from some rival deity, and that responsibility lies with us. Sin, he insisted, cannot be laid at the feet of cosmic powers.

But over time, Augustine’s position shifted radically, driven by new opponents. The British monk Pelagius taught that humans could reach salvation unaided, by sheer moral effort. Alarmed, Augustine hardened his doctrine of grace, becoming its great theorist. By the end of his life, he insisted that without God’s unearned gift, humans are powerless to do good; even the first steps of faith depend on grace. His prayer — “Give what you command, and command what you will” — captures this conviction. Critics accused him of veering back toward Manichaean determinism. Augustine countered with nuance: free will exists, but is helpless without grace. His revisions shaped centuries of debate.

Kant: From rationalist dreams to critical awakening


Few intellectual shifts have reshaped Western thought as profoundly as Immanuel Kant’s break with the rationalist metaphysics of his youth. His career divides neatly in two: the pre-critical years, ending in 1770, and the critical period, beginning in 1781 with the Critiques that would redirect philosophy itself. Who would have expected that this measured, provincial professor — methodically rehearsing inherited dogmas — would one day upend them? In his early work, Kant embraced German rationalism, convinced that pure reason could prove God’s existence and the soul’s immortality. Yet beneath his confidence stirred unease: How much could reason truly claim as certain?

Kant’s breakthrough began as an intellectual and existential crisis. Immersed in the writings of the Scottish skeptic David Hume in the 1760s, he felt as though Hume had interrupted his “dogmatic slumber,” shattering his faith in the unchecked power of pure reason. Hume’s piercing doubts about metaphysics forced Kant to see that he had been accepting, without question, comfortable assumptions — that reason could grasp causation, the cosmos, even God’s nature. Hume made him face the possibility that much of metaphysics was, in his phrase, “sophistry and illusion.”

The shock was profound. In midlife, Kant resolved to start again. He withdrew from publishing for over a decade — his so-called “silent period” — to rebuild philosophy from the ground up. Friends noticed he stopped frequenting social clubs, absorbed instead in solitary reflection. He later joked that the effort cost him his health, though he lived on, remarkably, to the age of 79.

In 1781, Kant stepped back onto the stage with the Critique of Pure Reason, a book he likened to a “Copernican revolution” in philosophy: shifting the center of knowledge from the world to the mind that knows it. No longer was the mind a passive mirror of reality. Instead, reality always appeared through the architecture of our perception. Space and time were not floating entities but the lenses through which anything can be seen at all. Cause and effect were not properties of objects but the laws our understanding uses to link events. From this radical view, we never grasp “things-in-themselves,” reality as it exists apart from us, but we can know appearances with certainty, since they are shaped in consistent ways by our faculties.

This was the birth of his “critical philosophy.” Questions the younger Kant had answered glibly were now declared insoluble. He had changed his mind about the mind itself — recognizing that truth required admitting where reason must stop. The Critique is dense, often desperate, as Kant tried to salvage what was vital in metaphysics while exposing its illusions. The book was met coolly; colleagues clung to the old certainties he dismantled. Yet Kant pressed on. With his later Critiques, he secured his overhaul. Modern philosophy begins here, at the boundary of what reason can know.

Marx: From romantic fire to the machinery of revolution


The thought of Karl Marx (1818–1883) turned with the force of a revolution, from youthful idealism to hard materialism — a shift so stark that scholars later spoke of a “young” and a “mature” Marx. In the 1840s, the young Marx, nourished by Enlightenment humanism and German Idealism, wrote with urgency about human “alienation” and the dream of fulfillment in a freer world. He pictured communism as the recovery of our “species-being,” where the human essence could finally blossom, unshackled from capitalist estrangement. Humanity stood at the heart of his vision. A fiery revolutionary in spirit, he still trusted reasoned critique to spark transformation.

Within a few short years, Marx underwent a profound conversion. Two shocks set it in motion. The first came through journalism, where he confronted the raw edges of social life: censorship, peasant misery, unjust laws. One case seared him — poor villagers punished for collecting fallen wood in aristocrats’ forests. What the wealthy dismissed as theft was, for the poor, survival. Encounters like these pulled him from “pure politics” toward economics and socialism. Abstract philosophy, he realized, could not explain or mend such wounds; he must plunge deeper into reality. With some irony, he would later recall in an 1859 preface that journalism had landed him “in the embarrassing position of having to discuss what is known as material interests.”

The second shock was intellectual: socialist and materialist ideas pouring in from contemporaries like Ludwig Feuerbach and, above all, Friedrich Engels. Engels persuaded the skeptical Marx that the working class was history’s true engine. By 1845, Marx was scorning his colleagues for floating in the clouds. At the same time, he was redefining human beings as products of social and class conditions rather than carriers of a timeless essence. In The German Ideology, he and Engels wrote that they had to “settle accounts with our former philosophical conscience,” leaving behind speculative habits. The break was so decisive that he never published his early manuscripts, judging them naïve.

The mature Marx changed his mind about how to change the world. That turn laid the foundation for Capital. The young Marx, still speaking the language of humanist ideals, gave way to the older Marx, who dissected capitalism with the scalpel of science. History, he now argued, advanced through material forces and class conflict, not lofty visions. Capitalism would not crumble before appeals to conscience; it would collapse only under the weight of its own contradictions, giving way to revolution when conditions ripened. Where the youthful Marx sketched utopias, the mature Marx turned to economics and praxis.

The cost of this shift was immense. Colleagues accused him of betraying philosophy; the Prussian state drove him into exile. He paid with career, homeland, comfort, enduring poverty, illness, and loss. Yet he stood firm, remembered for saying that “to leave error unrefuted is to encourage intellectual immorality.”

Wittgenstein: From logical perfection to living language


Ludwig Wittgenstein is the poster boy of philosophical shifts. In 1921, he published the Tractatus Logico-Philosophicus, a dense treatise declaring that the world is made of facts mirrored in logical propositions, and whatever cannot be so expressed must be met with silence. Its final line — “Whereof one cannot speak, thereof one must be silent” — sealed its austere vision. Convinced he had solved every riddle, Wittgenstein quit philosophy, calling his work “unassailable and definitive.” Then came the drama: He gave away his fortune, abandoned Cambridge for a village schoolhouse, and wrestled with guilt and depression, living in a fever of intellectual aimlessness.

Doubts soon began to gnaw at Wittgenstein’s certainty. The first crack came in 1923, when the young mathematician Frank Ramsey pointed out that the Tractatus couldn’t explain something as simple as why “a point in the visual field cannot be both red and blue.” The contradiction didn’t arise from logic, Ramsey argued, but from the texture of reality itself. Wittgenstein tried to patch the flaw, only to see the critique cut straight to the heart of his theory. The strain of rethinking left him at times feeling close to madness.

A few years later came the decisive blow: his conversations with Italian economist Piero Sraffa. In one famous moment, Sraffa brushed his chin in a Neapolitan “I don’t care” gesture and asked, “What logical form does that have?” Wittgenstein was floored. He later compared Sraffa’s influence to mining a stubborn ore vein — backbreaking work, yet rich with transformative discoveries.

After nearly nine “lost years,” Wittgenstein returned to Cambridge determined to confront what he called his own “grave mistakes.” The result was his second masterpiece, Philosophical Investigations, published posthumously in 1953, two years after his death — a book that flatly contradicted the Tractatus. Philosopher Lee Braver described the shift as “as radical as the move from modern to post-modern philosophy.” Gone was the dream that language could map the structure of reality. Instead, Wittgenstein came to see language as a mirror of us — woven into our practices, our habits, our forms of life. Meaning was no longer fixed by logic but by use, like Sraffa’s tossed-off chin-brush that baffled him.

The attempt to build a perfect logical language, he now warned, lures philosophers into illusions; the Tractatus itself was guilty of that folly. He even likened it to a faulty clock — impressive in design, useless in function.

That act of intellectual self-destruction bewildered some, yet over time it gave Wittgenstein a prophetic aura: the rare thinker who dared to scrap his own masterpiece in search of truth.

Simone Weil: From world revolution to soul revelation


Simone Weil may be the clearest portrait of a thinker who kept abandoning certainties in pursuit of higher truth. Born in 1909, she seemed to thrive on extremes: brilliant scholar, factory hand, political agitator, soldier, and finally a mystic flirting with sainthood. Her first identity was as a Marxist radical. Refusing the comforts of academia, she threw herself into workers’ lives, earning the nickname “The Red Virgin.” She gave away her income, joined protests, and labored incognito in factories — an ordeal that nearly destroyed her health. Her early essays, written through a Marxian lens, proclaimed revolutionary socialism as justice’s true path.

Yet Weil would not surrender her conscience to any creed. By the late 1930s, she was already revisiting her Marxian commitments, prying at their weak joints. What disturbed her most was Marx’s faith in history’s “laws,” the promise that the proletariat’s victory was inevitable. To her, this smelled of dogma no less than the Church’s certainties. Reality, she insisted, was tangled, tragic, never on a track to utopia. She also saw in communist movements the seeds of tyranny, where liberators might become new masters. Thus, she drifted into heresy on the left, holding truth above every party banner.

Then, the deeper upheaval: mysticism. In 1938, during Holy Week at the Benedictine monastery of Solesmes, Gregorian chant pierced her like light through armor, filling her with a joy so pure it seemed divine. Later, hearing George Herbert’s Love (III), she wrote, she felt “Christ himself came down and took possession of me.”

This was her point of no return: The ardent materialist and champion of the oppressed had fallen in love with God — specifically, the God on the Cross. Weil’s thought took on a new radiance, shaped by Christian-Platonist mysticism, though she never formally converted. She wrote of “decreation,” the undoing of self so that divine love might flow through. Her compassion became so uncompromising that it reached even enemies, and she treated affliction not as meaningless pain but as a doorway into God’s presence. The political revolution she once craved now seemed too small; what was needed was a revolution in the soul, wrought by grace.

In wartime exile, she wove her two great loves — justice and God — into one vocation. She volunteered for perilous missions, filled her notebooks with blazing insight, and restricted her food to the rations of those under occupation. The discipline consumed her; she died in 1943 at just 34. Even then, each reversal in her short life was, for her, an ascent — driven by an unyielding hunger for absolute truth and goodness, crowned at last in God.

Why great minds change their own minds


Philosophy often glorifies the builders of grand systems — the bold architects of thought who draw sweeping maps of reality. Yet no less worthy are the moments of self-correction, when a thinker dares to undo what once seemed certain. Conviction is not stubbornness. The greater strength lies in saying: “I have learned something that overturns my earlier belief, and I choose truth over pride.” Such reversals do not diminish a philosopher; they make them greater. We remember Wittgenstein not only for the Tractatus or the Investigations, but for the story of a mind unafraid to dismantle its own certainties.

These examples remind us that true critical thinking begins at home. To interrogate our own assumptions, to stay open when experience, dialogue, or suffering presses against us — that is philosophy at its deepest. Thought is not a finished edifice; it is a living journey. The greatest minds advanced by transcending themselves, sometimes carrying within them the drama of two philosophers at war.

In our polarized age, where doubt is mocked as weakness, their lesson is urgent. Can we welcome change in our own thinking — even delight in it? To do so is to join philosophy’s oldest vow: a fearless love of truth.

17 Sept 2025

Science’s answer to the ultimate question: Where do we come from?

Questions about our origins, biologically, chemically, and cosmically, are the most profound ones we can ask. Here are today’s best answers.





Behind the dome of a series of European Southern Observatory telescopes, the Milky Way towers in the southern skies, flanked by the Large and Small Magellanic Clouds, at right. Although several thousand stars and the plane of the Milky Way are all visible to human eyes, there are only four galaxies beyond our own that the typical unaided human eye can detect. We did not know they were located outside of the Milky Way until the 1920s: after Einstein's general relativity had already superseded Newtonian gravity. Today, this view helps us appreciate the awe and wonder that the Universe, and the cosmic story, holds for each of us. Credit: ESO/Z. Bardon (www.bardon.cz)/ProjectSoft (www.projectsoft.cz)

In all the world, and perhaps in all the Universe, there’s no greater question one can ask than the question of one’s own origins. For us, as human beings, this comes up often in our early childhood: we see, touch, and experience the world around us, and wonder where it all comes from. We look at ourselves and those around us, and wonder about our own origins. Even when we look to the heavens, and take in the spectacular sights of the night sky — the Moon, the planets, the stars, the glorious plane of the Milky Way, plus deep-sky objects — we’re filled with a sense of awe, wondering where the lights, and perhaps even the vast, empty darkness that separates them, all came from.

For millennia, we had only stories to be our guide: mythologies and untested, unsubstantiated ideas that sprung forth from human imagination. However, the enterprise of science has, for the first time in the history of our species, brought to us compelling, fact-based answers to many of these questions that enable us to make sense not just of nature, but of the story of how we came to be. Biologically, chemically, and physically, advances in the 19th, 20th, and now 21st centuries have enabled us to weave together a rich tapestry that finally answers the question so many of us have wondered about for so long: “Where do we come from?”

Here’s where we are today, right up to the frontiers of what’s currently known.


The evolution of modern humans can be mapped out, along with the history of both our extant and now-extinct cousins, thanks to an enormous wealth of evidence found worldwide in the fossil record. Various examples include Homo erectus (which arose 1.9 million years ago and only died out ~140,000 years ago), Homo habilis (the first member of the genus Homo), and the Neanderthal (which arose later than, and likely independent of, modern humans). Credit: S. V. Medaris / UW-Madison

Biologically, we are the descendants of a continuous, unbroken chain of organisms that go back approximately four billion years.


You are the child of your parents: a genetic mother and father, each of whom contributed 50% of your genetic material. That genetic material contains an enormous amount of information within it, telling your body what proteins and enzymes to produce, how to configure them together, and where and when to activate a variety of responses. Your genetics explains nearly everything about your body, from your eye color to the types of red blood cells you produce to whether you have a deviated septum in your nose or not. Your mother and father, in turn, are descended from their genetic parents — your grandparents — who were in turn descended from your great-grandparents, and so on.

It turns out that as we go back, and back, and back still further, we find that organisms change over very long periods of time, evolving in the process. This evolution is driven by a combination of random mutations and natural selection, where the organisms that are most fit for survival, and most adaptable to the changes that occur in their conditions and environment, are the ones who aren’t selected against, and whose lineages continue. We can extrapolate this back, and back, and back, to when human ancestors were:
  • other members of the genus Homo,
  • mere hominids that predate the emergence of our genus,
  • primates that predate the evolution of hominids,
  • mammals that predate any primate: monkey or ape,
  • going all the way back to single-celled asexually reproducing organisms that existed billions of years ago.

This tree of life illustrates the evolution and development of the various organisms on Earth. Although we all emerged from a common ancestor more than 2 billion years ago, the diverse forms of life emerged from a chaotic process that would not be exactly repeated even if we rewound and re-ran the clock trillions of times. As first realized by Darwin, many hundreds of millions, if not billions, of years were required to explain the diversity of life forms on Earth. Credit: Leonard Eisenberg/evogeneao

The oldest evidence we have for life on Earth goes back at least 3.8 billion years: to the date at which the oldest sedimentary rocks still at least partially survive. Earth may have been inhabited even further back, as circumstantial evidence (based on carbon isotope ratios from zircon deposits in even older rocks) suggests that Earth could have been teeming with life as early as 4.4 billion years ago.

But at some point, back in the environment of our newly formed planet, we weren’t teeming with life at all. At some point, a living organism emerged on Earth for the first time. It’s possible that an outside-the-box idea, panspermia, is correct, and that the life that exists here on Earth was brought here, cosmically, from somewhere else in space where life arose naturally from non-life.

Nevertheless, at some point in cosmic history, life did emerge from non-life. It is presently unknown exactly how that happened, and what came first:
  • the structure of the cell, separating a potential organism’s insides from the outside environment,
  • a string of nucleic acids that encoded information, enabling reproduction,
  • or a metabolism-first scenario, where a protein or enzyme that could extract energy from its environment formed first, and then reproduction and cellularity came afterward.
Although we aren’t certain of the pathway that it took, life did emerge from raw, non-living ingredients in the distant past.

If life began with a random peptide that could metabolize nutrients/energy from its environment, replication could then ensue from peptide-nucleic acid coevolution. Here, DNA-peptide coevolution is illustrated, but it could work with RNA or even PNA as the nucleic acid instead. Asserting that a “divine spark” is needed for life to arise is a classic “God-of-the-gaps” argument, but asserting that we know exactly how life arose from non-life is also a fallacy. These conditions, including rocky planets with these molecules present on their surfaces, likely existed within the first 1-2 billion years of the Big Bang. Credit: A. Chotera et al., Chemistry Europe, 2018

Therefore, chemically, at some point in the past, whether on Earth or elsewhere, a metabolism-having, replicating organism emerged, creating an origin point for life.


However, Earth itself, as well as the rest of our Solar System, needed to be brought into existence in order for there to be life on Earth at all. So where did the Earth, the Sun, and the rest of the Solar System come from? To answer this question, we can look to two different aspects of nature itself:

We can look to the various radioactive isotopes (and their ratios) of elements and use them to determine the age of the Earth, the Sun, and the various primordial (asteroid and Kuiper belt) bodies in our Solar System, determining when the Solar System formed.

And then we can look at star-formation (and stellar death) all across the galaxy and Universe, determining how stars are born, live, and die, and then use that information to trace back how our Sun and Solar System came into existence.

Here in the 21st century, we’ve done both of those things quite robustly. The Solar System is about 4.56 billion years old, with the Earth being slightly younger and the Moon being about 50 million years younger than Earth. We formed from a molecular cloud of gas that contracted and formed stars, with the planets (including primordial planets that may have since been ejected or destroyed) emerging from a protoplanetary disk that surrounded our young proto-Sun. Now, more than four and a half billion years later, only the survivors — including us — remain.
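
As a rough illustration of the radiometric half of that argument (a sketch using standard textbook values, not figures quoted in this article), a radioactive parent isotope decays exponentially, and, assuming no daughter atoms were present at the start, the measured parent-to-daughter ratio in a meteorite translates directly into an age:

\[
N(t) = N_0\,e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{N}\right),
\]

where N is the number of surviving parent atoms, D the number of daughter atoms the decay produced, and t_{1/2} the half-life. For the uranium-238 to lead-206 chain, with a half-life of about 4.47 billion years, a daughter-to-parent ratio near 1 implies an age of roughly 4.5 billion years, consistent with the 4.56-billion-year figure above.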


Although we now believe we understand how the Sun and our Solar System formed, this early view of our past protoplanetary stage is an illustration only. While many protoplanets existed in the early stages of our system’s formation long ago, today only eight planets survive. Most of them possess moons, and there are also small rocky, metallic, and icy bodies distributed across various belts and clouds in the Solar System as well. Credit: JHUAPL/SwRI

Like all stellar and planetary systems, our own Solar System formed from the contraction of a molecular cloud that triggered new star formation, giving rise to the Earth, the Sun, and more.


Once Earth was created, life emerged on it shortly thereafter. Whether it was rooted in deoxyribonucleic acid (DNA), ribonucleic acid (RNA), or a peptide-based nucleic acid (PNA), at some point in the past, a molecule formed that encoded the production of a protein or enzyme that could metabolize energy, and that was capable of replicating and reproducing itself: a vital step toward modern, living organisms. But in order for those molecules to form, precursor molecules needed to exist: things like amino acids, sugars, phosphorus-based groups, and so on. These, in turn, required a slate of raw atomic ingredients, including:
  • hydrogen,
  • carbon,
  • nitrogen,
  • oxygen,
  • phosphorus,
  • sulfur,
  • calcium,
  • sodium,
  • potassium,
  • magnesium,
  • chlorine,
and much more.

But with the exception of hydrogen, the most abundant element in the Universe, none of these elements were present in the earliest stages of cosmic history. The Universe must have, somehow, created these elements, as these atomic building blocks are absolutely necessary to the formation not just of living organisms, but of rocky planets like Earth themselves.

Fortunately, we do have a cosmic story that accounts for the emergence of these elements: from the life cycles of stars. Through stellar deaths, including from Sun-like stars that die in planetary nebulae, from very massive stars that die in core-collapse supernovae, from neutron stars that collide in kilonovae, and from white dwarf stars that explode in type Ia supernovae, the heavy elements of the Universe are created and returned to the interstellar medium, where they can participate in new episodes of star formation.


The most current, up-to-date image showing the primary origin of each of the elements that occur naturally on the periodic table. Neutron star mergers, white dwarf collisions, and core-collapse supernovae may allow us to climb even higher than this table shows. The Big Bang gives us almost all of the hydrogen and helium in the Universe, and almost none of everything else combined. Most elements, in some form or another, are forged in stars. Credit: Cmglee/Wikimedia Commons

This teaches us that our Sun, Earth, and Solar System were born from the ashes of pre-existing stars and stellar corpses that lived, died, and returned their processed interiors to the interstellar medium.


So that’s where humans come from, where life comes from, where the Solar System comes from, and where heavy elements come from. You need stars to make the raw ingredients to have planets; you need a late-forming star with enough heavy elements in it to make a rocky planet with the right ingredients for life; you need the right chemical reactions to kick off to create a living creature from non-life; then you need the right conditions for life to survive and thrive over geological timescales, under the pressures of natural selection, to create the diversity of life we find on Earth today, including human beings.

But in order for this to occur, you need to make stars for the very first time, and that requires a set of ingredients and conditions, too. You need neutral atoms, and in particular large numbers of hydrogen atoms, and that’s ok: they were formed in the early stages of the hot Big Bang. But you also need a non-uniform Universe: one with overdense regions that would gravitationally attract more and more matter into them, until enough matter had gathered that stars would form for the very first time. Under the laws of general relativity, based on the initial fluctuations we see in the cosmic microwave background, that’s precisely what our Universe gives us: a set of conditions and ingredients that enable the formation of stars, for the first time, from a pristine collection of neutral atoms.


The very first stars to form in the Universe were different than the stars today: metal-free, extremely massive, and nearly all destined for a supernova surrounded by a cocoon of gas. There was a time, prior to the formation of stars, when only clumps of matter, unable to cool and collapse, remained in large, diffuse clouds. It is possible that clouds that grow slowly enough may even persist until very late cosmic times. Credit: NAOJ

Those very first stars formed early on, back before the Universe was even 2% of its present age: the furthest back that we’ve ever observed a star, galaxy, or quasar with the record-setting James Webb Space Telescope (JWST). They likely formed simply by gravitational contraction of a gas cloud, and were hindered by a lack of heavy elements to efficiently cool those clouds as they contracted, requiring very large masses to gather to trigger gravitational collapse. As a result, these first stars, which still have yet to be spotted, were likely very high in mass, and very short-lived as a result.
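
For a sense of scale (using the standard figure of about 13.8 billion years for the present age of the Universe, a number not quoted in this article), “2% of its present age” corresponds to less than 300 million years after the Big Bang:

\[
0.02 \times 13.8\ \mathrm{Gyr} \approx 2.8 \times 10^{8}\ \mathrm{yr}.
\]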

Although we have yet to find the first stars, representing a “missing link” in cosmic evolution, scientists can be certain they existed: in between the massive galaxies spotted by JWST and the neutral atoms formed way back at the epoch of the cosmic microwave background.


Nevertheless, we continue in our quest for the ultimate cosmic origin. Those stars must have formed from neutral atoms, and in the framework of the Big Bang — and validated by observations for 60 years, and counting — neutral atoms can only form when the Universe cools from a hot, dense, plasma state (where all of the atomic components are ionized) to a less hot, less dense state where neutral atoms are stable. In the aftermath of such a transition, a background of low-energy remnant radiation would be emitted omnidirectionally, persisting even until the present day. It was the detection of that remnant primeval radiation, now known as the cosmic microwave background (CMB), that sealed the deal for the Big Bang.

At early times (left), photons scatter off of electrons and are high-enough in energy to knock any atoms back into an ionized state. Once the Universe cools enough, and is devoid of such high-energy photons (right), they cannot interact with the neutral atoms, and instead simply free-stream, since they have the wrong wavelength to excite these atoms to a higher energy level. Credit: E. Siegel/Beyond the Galaxy

In order to create stars, the Universe needed to create neutral atoms, which were produced about 380,000 years after the onset of the hot Big Bang.
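
A back-of-the-envelope way to see how that epoch connects to today’s observations (a sketch with standard values not given in the article): neutral hydrogen becomes stable once the radiation temperature drops to roughly 3,000 K, and since the radiation temperature falls in inverse proportion to the stretching of the Universe, that moment is tied directly to the 2.7 K background we measure now:

\[
T(z) = T_0\,(1+z) \quad\Rightarrow\quad 1+z_{\rm rec} \approx \frac{3000\ \mathrm{K}}{2.725\ \mathrm{K}} \approx 1100,
\]

meaning the light released when neutral atoms formed has had its wavelength stretched by a factor of about 1,100 on its way to becoming the cosmic microwave background.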

Of course, a hot, dense plasma wasn’t the beginning of things either. If you continue to extrapolate backward in time — toward a hotter, denser, more uniform state — you’d come to a time when it was too hot and dense to form atomic nuclei; you would have only had bare protons and neutrons. At still higher temperatures and earlier times, the energy of any radiation present (as well as neutrinos and antineutrinos) would have been sufficient to cause protons and neutrons to interconvert, leading to a 50/50 split between protons and neutrons.

Therefore, as the Universe expands and cools from those early conditions, and these nuclear reactions cease to occur, we should wind up with a tilted abundance of protons versus neutrons: one that favors protons. Then, as the Universe cools further, nuclear fusion reactions can proceed, first forming deuterium out of protons and neutrons and then synthesizing heavier elements, like helium, and then (if there’s enough energy) lithium and heavier elements, after that. It’s by:
  • measuring the baryon-to-photon ratio of the Universe,
  • predicting, through nuclear physics, the abundance of the light elements,
  • and then examining the Universe itself to learn how abundant the light elements actually are,
that we learn how Big Bang nucleosynthesis, or the science of making elements even before the first stars formed, proceeded.
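
As a sketch of the standard arithmetic behind that prediction (the particular numbers are textbook values, not taken from this article): when the weak interconversion reactions freeze out at a temperature of roughly 0.7 MeV, the neutron-to-proton ratio is set by the Boltzmann factor of their mass difference; free-neutron decay then lowers it to about 1/7 by the time deuterium can survive, which pins down the primordial helium-4 mass fraction:

\[
\left.\frac{n}{p}\right|_{\rm freeze-out} \approx \exp\!\left(-\frac{\Delta m\,c^{2}}{k_B T}\right) \approx \exp\!\left(-\frac{1.29\ \mathrm{MeV}}{0.7\ \mathrm{MeV}}\right) \approx \frac{1}{6}, \qquad \left.\frac{n}{p}\right|_{\rm BBN} \approx \frac{1}{7},
\]
\[
Y_p \approx \frac{2\,(n/p)}{1 + (n/p)} \approx \frac{2/7}{8/7} = 0.25,
\]

in good agreement with the roughly 25% helium-4 (by mass) measured in the most pristine gas clouds.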


This plot shows the abundance of the light elements over time, as the Universe expands and cools during the various phases of Big Bang Nucleosynthesis. By the time the first stars form, the initial ratios of hydrogen, deuterium, helium-3, helium-4, and lithium-7 are all fixed by these early nuclear processes. Credit: M. Pospelov & J. Pradler, Annual Review of Nuclear and Particle Science, 2010

And indeed, to form the Universe we see, the light elements were forged together through nuclear reactions in the early stages — the first few minutes — of the hot Big Bang.


Finally, we go back earlier and earlier, to hotter and even denser conditions. At some point, protons and neutrons cease to be meaningful entities, as the Universe takes on the conditions of a quark-gluon plasma. At high enough energies, matter-antimatter pairs spontaneously get created from photons and other particles colliding: a consequence of Einstein’s mass-energy equivalence, or E = mc². All of the particles and antiparticles of the Standard Model, even the unstable ones, were created in great abundance under these early conditions. And at early enough times, the electromagnetic force and the weak nuclear force were unified into one: the electroweak force.
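
To put a rough number on “high enough energies” (standard values, not drawn from this article): making an electron-positron pair out of radiation requires photons at or above the pair’s combined rest energy, which corresponds to temperatures above about ten billion kelvin:

\[
E_\gamma \gtrsim 2\,m_e c^{2} \approx 1.02\ \mathrm{MeV}, \qquad T \sim \frac{2\,m_e c^{2}}{k_B} \approx 1.2 \times 10^{10}\ \mathrm{K},
\]

with correspondingly higher thresholds for the heavier particle-antiparticle pairs of the Standard Model.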

And yet, despite all we know, some additional gaps and mysteries still remain.

  • At some point, even though we don’t know how, more matter was created than antimatter, leading to our matter-dominated Universe today.
  • Before that, were there additional unifications that occurred? Was gravity, at some point, unified with the forces of the Standard Model, and was there a Theory of Everything that described reality?

We don’t know. But we do know that the hot Big Bang, even at its hottest, wasn’t the very beginning of everything. Instead, the conditions that the Big Bang was born with:
  • perfect spatial flatness,
  • a lack of leftover, high-energy relics,
  • with a maximum temperature well below that of the Planck scale,
  • with the same temperatures and densities everywhere and in all directions,
  • with tiny, 1-part-in-30,000 overdensities and underdensities superimposed atop them on all scales,
  • including on super-horizon scales,
are exactly the conditions that a phase of cosmic inflation, predating and setting up the Big Bang, would have predicted.


From a region of space as small as can be imagined (all the way down to the Planck scale), cosmological inflation causes space to expand exponentially: relentlessly doubling and doubling again with each tiny fraction-of-a-second that elapses. Although this empties the Universe and stretches it flat, it also contains quantum fluctuations superimposed atop it: fluctuations that will later provide the seeds for cosmic structure within our own Universe. What happened before the final ~10^-32 seconds of inflation, including the question of whether inflation arose from a singular state before it, not only isn’t known, but may be fundamentally unknowable. Credit: Big Think / Ben Gibson

Before the Big Bang, the Universe wasn’t dominated by matter or radiation, but by energy inherent to space itself, in a phase known as cosmic inflation.

And this, at last, is where our knowledge comes to an end: not with a gap, but rather with a cliff of ignorance. Inflation, by its very nature, is a period where there was an incredible amount of energy locked up in the fabric of empty space itself. In this state, space expands at a relentless, exponential pace, doubling in size in all three dimensions in just a tiny fraction of a second, and then doubling again and again and again with each subsequent fraction of a second that elapses.
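
To get a feel for how fast that runaway growth is (illustrative arithmetic only, not figures from the article): exponential expansion means every “doubling time” multiplies all lengths by two, so the sizes involved explode after even a modest number of doublings:

\[
a(t) \propto e^{Ht}, \qquad \frac{a(t_0 + N\,t_{\rm double})}{a(t_0)} = 2^{N}, \qquad 2^{100} \approx 1.3 \times 10^{30},
\]

so about 100 doublings, each lasting only a tiny fraction of a second, are already enough to stretch a Planck-length patch to roughly the size of a dust grain, tens of microns across.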

However, because our observable Universe is of a finite size, this means that only the final small fraction-of-a-second of inflation leaves any imprint on our Universe; it’s from that brief epoch that we’ve been able to determine that inflation occurred at all. For everything that came before it, including:
  • answers to the question of how long inflation endured,
  • answers to the question of whether inflation was eternal or whether it started from some pre-inflationary conditions,
  • what those pre-inflationary conditions were,
  • and whether there was an ultimate beginning to, say, what we think of as the fundamental entities of space, time, and the laws of physics that govern them,
we simply have no information, only speculations. Science, remember, doesn’t give us the ultimate answers to our inquiries; it simply gives us the best approximation of reality, given our current state of knowledge, that is consistent with all the evidence we’ve collected to this point. We’ve come incredibly far in our quest to make sense of the Universe, and while there are still open questions that science is pursuing, the broad strokes — plus a great many details — of “where we come from” are finally known.


Ethan Siegel, Ph.D., is an award-winning theoretical astrophysicist who's been writing Starts With a Bang since 2008. You can follow him on Twitter @StartsWithABang.

1 Sept 2025

Education that isn't working: "There are 12-year-old students who can't even write their own name"

Parents, teachers, and education scholars are worried about an "educational crisis" with the worst results since international measurements began.

By Olga R. Sanmartín. Updated Friday, August 29, 2025 - 22:56

Source: https://www.elmundo.es/espana/2025/08/29/688359b0fdddff18318b45cb.html 

Tania Alonso, David Reyero, and Alberto Sánchez-Rojo in a classroom at the Faculty of Education of the Universidad Complutense. ELENA IRIBAS

Mariela Pica, a Spaniard born in Argentina who has lived in Barcelona for 18 years, moved her three children to a different school last year because she could see they were doing very little in class. She had enthusiastically enrolled them in a public school with no textbooks or workbooks, no compulsory homework, and no ball games at recess. A school where the lunchroom had been rebranded as the "midday space," the Physical Education subject was called "Body and Movement," and the word "parents" had been replaced by "families."

"I'm a bit of a hippie, an atheist, and an anarchist. I thought this school was cool, but it turned out to be a school built for social media. The reality was that my eldest son wasn't learning properly. At eight he read with great difficulty, wrote numbers backwards, and couldn't subtract. They never corrected him for fear of causing trauma: to avoid pointing out his spelling mistakes, they told us he had 'natural spelling,'" recounts this Labor Relations graduate who, as soon as she moved her children to a "traditional" religious school, with worksheets and textbooks, began to see the kids buckle down.

"At the previous school they didn't drill the multiplication tables, on the idea that that's what calculators are for. I'm not saying they should spend all day weighed down by discipline and work, but in the end you need arithmetic for daily life. Not everything has to be fun, easy, or immediate. Lowering the bar benefits no one. Advancing in knowledge and achieving something that took effort is what gives the most satisfaction," argues this mother, who has written a letter to the Generalitat saying she has felt "cheated by the system and by the school" and demanding changes to the educational model that has prevailed in Catalonia for a decade and that the 2020 LOMLOE law has enshrined.

Mariela Pica has discussed it with other mothers and they all agree: their children are not learning and, on top of that, they find themselves forced to "use the weekends to teach the things that should be taught at school." "The discontent among families is widespread; our children aren't learning, they don't understand what they read, and the level has dropped," she laments.

That the education system is not working well is borne out by the latest PISA tests, in which Spanish 15-year-olds posted the worst results in their history in Mathematics and Science. These kids are less competent than the teenagers of the previous decade: they lag academically by the equivalent of half a school year. Just as mediocre are the results of 10-year-olds in the PIRLS reading test and the TIMSS mathematics test, where Catalonia fares especially poorly, having fallen to the level of the Canary Islands, well below what its socioeconomic level would predict.


Catalonia's results in its own regional tests also show that 12-year-olds' level in Mathematics is the lowest since records began. So much so that the Generalitat has asked the OECD for help, paying it 1.5 million euros to improve its education system. Even the Fundación Bofill, standard-bearer of "innovative" pedagogical models like that of the school Mariela Pica fled, has acknowledged that "there is a systemic failure in learning outcomes," implying a structural problem affecting the entire system, from the earliest years to the Selectividad university entrance exam, where top marks have fallen this year by almost half.

Families and teachers charge that Catalonia, a pioneer of the LOMLOE and the region keenest to embrace the new pedagogies and fill classrooms with screens, is mired in "an unprecedented educational crisis" that is also surfacing in other regions, such as the Basque Country, where poor results have set off the first alarms.

What is going on?

"In Catalonia, above all in the Bachillerato, there are more and more hours for electives and fewer for subjects like Mathematics, Physics, or Biology, which is a serious problem for organization and for setting quality standards. On top of that, ever more special activities eat into class hours, at a critical moment for knowledge, as the international tests show. The LOMLOE allows subjects to be merged and taught by 'areas,' combining, say, English and History, which is a muddle for the students. And then they have to produce a supposed scientific research project, but they don't have the prior knowledge to do it," lists Susanna Roch, a Catalan Language teacher at a public secondary school in the province of Girona.

Catalonia, an enthusiast of the so-called "competency-based model" promoted by the LOMLOE, has taken it to the extreme, "prioritizing attitudes over the learning of knowledge," she says. The result is that "students know perfectly well that being an environmentalist, by today's standards, means driving an electric car, but they don't understand what a battery is."

The grading system, in her view, is "more subjective" now and "looks like a social-engineering project" because "a student's grade goes up if they understand feminism the way it is supposed to be understood." At the same time, "there is so much paperwork required to justify every bad grade that it is very hard to fail anyone."

The plan to give every student their own laptop, which at first seemed a good idea because it proved useful during Covid, has only made things worse, the teacher notes, because "these tools work well, for instance, for recreating a cell in three dimensions, but they distract and make students tune out."

"There is also a serious problem with reading comprehension; many students don't understand what they read. I get students in the first year of ESO who are illiterate and, at 12, cannot write their own name. I've come across kids in the Bachillerato who couldn't recognize verbs or didn't know what the words 'prose' or 'proletariat' meant. They are offered Biology in English or classes on learning to learn with critical thinking, but no one gives them an intensive literacy course that might make up for these deficits," she adds. She maintains that the LOMLOE, with its laxity in letting students move up a year and graduate with failed subjects, "does not help students take responsibility."

"Las familias delegan en las pantallas"

La mayoría de estos cambios no se circunscriben sólo a Cataluña. Es un problema educativo nacional que, además, es síntoma de un fenómeno mayor. La escuela no cumple su función ante la inacción de los políticos -la ministra de Educación, Pilar Alegría, está desaparecida en los asuntos de su área, mientras que las CCAA del PP han fracasado este curso en su intento de implantar una Selectividad común- y unos padres que no están a lo que tienen que estar.

«Las familias están desbordadas porque no tienen condiciones laborales ni tiempo para poder educar bien a sus hijos y delegan sus funciones en las pantallas y en la escuela, que cada vez tiene una tarea más asistencial, con profesores que hacen de psicólogos, de monitores de tiempo libre o de coaches. No es popular reclamar exigencia y los padres no corrigen a sus hijos por la idea mal entendida de que no hay que forzarles. Cada vez veo a más niños que no dicen 'hola' y a sus padres no se les ocurre decirles: '¡Oye, saluda!'. Ponemos mucho el foco en cómo se sienten, pero no en sus responsabilidades», describe Roch.

En este contexto, tanto en la escuela como en los hogares «se elude aplicar con claridad y consistencia las normas, se toleran comportamientos inaceptables y se generaliza que los actos no tengan consecuencias».

"Universitarios que no toman apuntes"

La tendencia se observa con los niños en los colegios, pero se ha extendido a los adolescentes de los institutos y de ahí a los jóvenes adultos en la universidad, como prueba ese hilo en X que se viralizó en junio en el que Manuel Hidalgo, profesor de Economía de la Universidad Pablo de Olavide de Sevilla, denunciaba una «apatía general» por parte de buena parte de sus estudiantes: «Acaba el curso y las sensaciones son muy malas, en muy poco tiempo hemos visto (lo acabamos de comentar en el grupo de profes de la asignatura) una caída en picado de la actitud del alumno medio».

Este docente contaba que el 40% de los alumnos no va a clase y, de los que van, «muchos no toman apuntes». «En el examen no comprenden algunas palabras o te hacen preguntas que, literalmente, me sorprenderían de niños de 10 años», señalaba.

Su relato coincide con lo que ha ocurrido este año en la Selectividad de Madrid, donde la caída en picado en los resultados del examen de Matemáticas para los alumnos del Bachillerato de Ciencias Sociales no tuvo que ver con que la prueba fuera más difícil, sino con que «el enunciado era más largo de lo habitual y muchos no lo entendieron», según una de las responsables de este examen.

The education scholar David Reyero and the philosopher Alberto Sánchez-Rojo, the leading voices behind La educación en la era digital (Encuentro), confirm the existence of that "reading comprehension problem" and also point to "a structural change in teacher training, both in the curricula, where education has been psychologized, and in the nature of the school itself, which has lately taken on a purpose of care and socialization, devaluing the transmission of knowledge." They agree with the education scholar Tania Alonso, their colleague in the Department of Educational Studies at the Complutense Faculty of Education, that "we have to get students to read more and learn more."

"The educational goal of intellectual flourishing has been swapped for that of emotional well-being," they warn, at a time when "adults want to spare their children suffering at all costs." "You feel good when you do something that takes you out of yourself. The school, in that sense, places students before the world, forcing them to stop gazing at their own navel," Reyero reflects. "The school is the place where the opportunity arises to learn things that cannot be learned outside it," adds Sánchez-Rojo.

All three point to a change in the profile of teachers, former LOGSE pupils who, in general, "do not like reading," even though most come from the humanities track of the Bachillerato. Nor do they like Mathematics, according to a Complutense study showing that these young people enter teacher training with fairly low grades. "Education is a fallback degree, for people who don't quite know what to do or who didn't get into what they wanted. Some have no love of knowledge and, since throughout the degree they are told that what matters is knowing how to teach rather than knowing the discipline well, they end up in a role tied more to caregiving."

More and more students are falling behind

Results are worse than a decade ago, as PISA reflects; it also shows that the decline has occurred in almost all OECD countries except the Asian ones, which have improved through hard work and effort. Singapore stands out with nearly 40% of top-performing students in Mathematics against 8% of low performers. Similar figures are seen in Japan, South Korea, and Estonia. In Spain it is exactly the reverse: we have fewer and fewer outstanding students (6%) and an ever larger share of students who fall behind (27% in the latest round).

27 May 2025

Daniel Dennett’s 4 rules for a good debate

What’s the point in fighting a made-up monster?

By Jonny Thomson

 


A straw man is a simplified or exaggerated version of somebody’s argument, built to be easier to target: an opponent you can blow down with adversarial flair. For example, if an atheist says that Christianity is just worshipping “some bearded man in the sky,” well, that’s a straw man, because barely any Christian would accept that representation of their religion. Of course, if a Christian says that an atheist does not believe in anything or that life has no meaning, that is also very likely a straw man.

The problem with the straw man argument is that not only does it not actually address someone’s points, but it poisons the entire debate. It’s a bad-faith argument that sees conversation as a brawl and “truth” as only one weapon in the war to win at all costs. But there is a better way.

The steel man

The opposite of a straw man is a steel man. This is where you not only represent someone’s arguments faithfully and with respect, but you do so in the best possible light. You spend a great deal of attention clarifying and double-checking what your debating partner actually means.

In my experience, if you take the time to genuinely inquire about what someone believes, you will find far greater nuance — and often far greater agreement — than you thought at the start. For example, only the most dastardly and venal of politicians are doing it entirely for themselves. Most politicians want to make society and the world a better place. It’s just that left- and right-wing arguments differ about how to achieve that.

In the 2013 book Intuition Pumps and Other Tools for Thinking, philosopher Daniel Dennett described something like the steel man in his four rules for any good philosophical debate:

  • First, and most important, is that you should “attempt to express your target’s position so clearly, vividly, and fairly that your target says, ‘I wish I’d thought of putting it that way.’”
  • Second, you should list all of the ways in which you and your partner agree on things.
  • Third, you should recognize the ways in which your partner has taught you something new.
  • Fourth, only after all of this can you go on to try and rebut or criticize their position.

This doesn’t mean that you have to agree or compromise on your position, though. After all, some people hold repulsive and horrendous beliefs. It just means that you should fight what is there to fight, and not an imaginary shadow or straw man.

The Greek steel man

If you read Plato’s dialogues, you’ll see that Socrates often presents his opponents with a steel man argument. A huge part of the dialogue involves Socrates clarifying, laying out, and even strengthening the other person’s position.

For instance, in the dialogue Gorgias, Socrates pauses and reframes Callicles’ position about strength and domination, often making it sound more coherent than Callicles himself. In Republic, when Thrasymachus argues that justice is nothing but the interest of the stronger, Socrates doesn’t caricature or simplify the idea. Instead, he builds it up until it sounds almost plausible, before patiently showing its cracks.

Debates today are often about trying to grab the six o’clock headlines or ride the viral wave with an entertaining “gotcha” moment. But the Socratic method was about understanding fully before disagreeing deeply. That’s the spirit of the steel man.

Everyday debates

Of course, Plato’s dialogues are not podcast transcripts. They are the fictionalized accounts of Plato’s version of Socrates debating Plato’s version of his opponents. But the fact that Plato was so willing to present rival opinions in such a strong and positive light (before pulling them apart) reveals just how far we have drifted in what we call a good discussion.

In many ways, the problem of straw-manning and ridiculing invented beliefs will not go away. Point-scoring and playing to the crowd were present in ancient Greece, and they’re here in modern prime-time TV debates. But the steel man is something we can try to bring back to our everyday conversations with the people we interact with weekly.

What Plato and Dennett both knew is that if you are to actually grow in your beliefs, you have to see debate as a constructive act, undertaken with good intentions. Debates are not adversarial opportunities to make your opponent look like an idiot; they are opportunities to become better and grow.


From: https://bigthinkmedia.substack.com/p/daniel-dennetts-4-rules-for-a-good 

20 jan 2025

Let me tell you a story...


I am told in my Department that, at the next staff assembly (Claustro), we will vote on whether to allow the Islamic veil in our school, and that there will be no debate, so as not to get tangled up in a long discussion. And it seems the vote will be held in secret. Let me steal a few minutes of your time, because I want to tell you a story:

 

They told me at school that there was a time when people were killed for thinking differently. They told me at school that those whose religion was not the true one were expelled from our land. They told me at school that some people thought it was a good idea to exterminate entire groups because their race was not ours. They told me at school that, for many centuries, diverse communities were enslaved or viewed with suspicion because of the color of their skin, because they dressed differently, ate differently, or had different customs. They told me at school that, until very recently, half the world's population was deprived of rights because they were women. They told me at school that there was a time when hatred split families apart and we spent three years, here in Spain, at war. They told me at school that afterwards there was a dictatorship, and that the only word that counted was that of the man who had won the war. They told me at school that, in the north, some devoted themselves to killing those who did not think like them, with a shot to the back of the neck and with car bombs. They told me at school that, in the south, we built a very tall fence topped with spikes so that the people we want kept out cannot cross over to this side.

They also told me at school that, at last, a time came when we learned to live together while respecting our differences; that there is something called human rights, and that they are for all humans, no matter where you come from, where you were born, who your parents are, how much money is in your account, how you dress, what you eat, or what name you give the God you pray to. They told me at school that we are all equal and free; that talking things through is how people come to understand each other; that you must not impose what you think on anyone; and that it does not matter if others paint their lives with a palette of colors you detest. They told me that we are all here to make life a little easier for one another; that variety is the spice of life; that when in doubt it is better to keep thinking; and that if wounds are not cleaned they end up infected and gangrene can set in. They also told me that wrinkles are beautiful and that it was good to listen to the old; that it is a lovely thing to live in places where there is room for everyone; that, to understand what I do not understand, I can raise my hand and ask those who do know, because they are living it; that nobody knows everything, but that together we can come closer to knowing it; that the little pieces of truth can be added up to make a bigger one, even if that is frightening at first; and that freedom of thought had to be respected.

And they also told me that they were counting on me to contribute something to this challenge. And I believed those who told me... And I became a teacher, like them, to go on writing this story.

And I dreamed that the final pages of our story took us very far: I dreamed that, in class, besides "our subjects," we taught our students, by our example, to be better people, to think differently, to be creative, to question beliefs, to refuse to accept the politically correct without question, to be critical of impositions, to respect their classmates, to enjoy disagreements, to doubt their own certainties, not to be afraid, to trust themselves, to listen to those who think differently, to understand that everyone has their own history, to try always to help or, at least, not to get in the way of those who are helping, to ask for help, to get involved, to stand up for those going through hard times, to surpass themselves, to care for the most fragile, to focus less on what separates us and more on what unites us, to have an immense heart and make it grow at the same pace as we fill their heads with things... and to cover those heads however they like, shave them or dye them whatever color they please, because what matters is not what is on the outside but what they carry inside.

And today I woke up and, while my dream slowly cools like the morning coffee, I see that there is an important page that could change the ending of our story. On Wednesday we will vote in the Claustro on what we have been debating for some time: whether the students and teachers who so choose may continue to wear the veil. It is not for us to decide that.

I have read everyone's opinions, and the harm we are doing to one another pains me deeply. We are teachers, not judges. We are colleagues, not rivals. Regardless of our reasons, our arguments, or who has more support on each side... perhaps it would be good to listen, in attentive silence, to those who, through their free choice to dress as they wish, have already been speaking to us for a long time. It is as wrong to force someone to dress in a way they have not chosen as it is to forbid them to dress as they have decided to. The extremists on one side we call Taliban; as for those on the other, I would not like that label to become attached to the name of any colleague, nor to that of my school. Decisions like this belong to others; our job is to educate, to care, to sow, to help grow, to form, and to teach how to unfurl the sails. Sometimes you know which side to stand on simply by looking at who is standing opposite.

We can load ourselves up with arguments to validate our own position, each of us our own, but in the process we also crush those who will no longer be able to choose something as simple as putting on or taking off a veil, a sign of identity that has nothing remotely to do with educational matters. With prohibitions that make no pedagogical sense, everything becomes a little uglier. I would not like my students to have to tell the story that there was a time when their teachers voted for that. Let the deciders decide; our job is to educate, which is what we do. Let the imposers impose; ours is to persuade, even if it takes longer. Let the takers take away freedoms; ours is to give, and to walk toward genuine, true freedom. Let the compellers compel; ours, in the meantime, is to build an educational environment that is free, egalitarian, stimulating, diverse, and plural.

Some of you will think this has nothing to do with you and will prefer not to get wet. I understand, because it is true: it does not affect you right now, but it does affect some people around you. You do not want to get wet, but others are splashed by your decision... And they feel cold... And they need your umbrella to shield them from the spit... You will have to get wet, or let others do the splashing, or splash others yourself; you choose. Liberty, equality, and fraternity was the motto of the French Revolution. My wish is that our school should remain a place of liberty, equality, and fraternity... And, even if it sounds like a pedantic tautology, liberty is achieved by being free, equality when we are all equal, and fraternity when we act as if we really believed the other two.

If you identify with this story, I am counting on you to defend freedom, not only here but in every school in La Rioja. If you are still in doubt, reflect, ask questions, but do not stay silent or simply go along. Sapere aude! Dare to think! We are writing an important page of our small history. And, of course, if you disagree and feel like it, we can have a coffee and you can tell me your story. We probably will not reach an agreement but, at least, we will have listened to each other, which is a fundamental skill that those of us who are surrounded by students, and devote ourselves to them, need to learn.

Our reasons matter little when what is at stake is making other people's lives worse, restricting freedoms, or imposing criteria. We are teachers. Let us not stop being teachers.

* * * * *

With all the affection in the world for all the colleagues who, from whatever position, are helping to make every day we spend together in this educational task worth living with the intensity, creativity, daring, and courage that this beautiful profession demands. Someone will carry on our story, and the effort of writing our tale will have been worth it. We hoist sails; we do not lower veils!