A thread for links you think others should check out.
15 thoughts on “Hey! Check This Out! (2022)”
Sarah Zhang, one of several science writers at the Atlantic, has an interesting notion in this piece.
Her idea is this: certain diseases were all but eliminated by improvements in water sanitation–no modern society would tolerate unsanitary conditions with regard to water and refuse. If we applied the same thinking to airborne diseases, could we eliminate many of them?
Zhang starts by mentioning seasonal flu. With better ventilation and air filtering in our buildings, maybe the phenomenon of seasonal flu would become a thing of the past. Could we have largely been spared the havoc caused by COVID-19?
One catch is the cost–not only of upgrading existing buildings, but also of the extra energy buildings may have to use to achieve this, and my understanding is that buildings already consume a lot of energy. However, think of a deadlier airborne disease that may eventually come about. If better building ventilation could reduce or prevent such a disease from spreading wildly, the cost might be worth it.
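To get a rough sense of what ventilation buys, the standard well-mixed-room model (an idealization; real rooms mix unevenly) says airborne particle concentration decays exponentially with the air-change rate. A quick sketch, with air-change figures that are my own illustrative assumptions rather than anything from Zhang's article:

```python
import math

def clearance_time_minutes(target_fraction, ach):
    """Minutes for a well-mixed room to dilute airborne particles down
    to target_fraction of the starting concentration, assuming
    exponential decay: C(t) = C0 * exp(-ach * t), with t in hours."""
    return -math.log(target_fraction) / ach * 60

# Illustrative air-change rates (assumptions, not from the article):
# ~1 ACH for a tight home, ~12 ACH for a hospital isolation room.
for ach in (1, 6, 12):
    minutes = clearance_time_minutes(0.01, ach)  # time to remove 99%
    print(f"{ach:>2} ACH: 99% of aerosols cleared in ~{minutes:.0f} min")
```

The point is just that doubling the air-change rate halves the clearance time, which is why upgrading ventilation costs energy.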
The 60-Year-Old Scientific Screwup That Helped Covid Kill from Wired magazine is tangentially related to Zhang’s piece above. I recommend it–especially if you like a good detective story.

If you’ve followed the news about the pandemic, you know that initially the scientific community didn’t think COVID-19 was an airborne disease. Later they modified this stance.
The article seems to identify the heart of the error–an error involving the cutoff point that determines whether a pathogen can infect someone through the air. For a long time, that cutoff point, used by major health institutions like the World Health Organization (WHO) and the CDC, was 5 microns (one micron = one-millionth of a meter). That is, any particle smaller than 5 microns could be airborne, while anything larger would be considered a droplet (droplets fall quickly to the ground, and transmission can occur through touching surfaces those droplets landed on).
However, the article mentions one problem with this:
“The physics of it is all wrong,” Marr says. That much seemed obvious to her from everything she knew about how things move through air. Reality is far messier, with particles much larger than 5 microns staying afloat and behaving like aerosols, depending on heat, humidity, and airspeed.
Marr refers to Linsey Marr, a scientist who joined other scientists to study this question. One big component of their study involved explaining why 5 microns became the cutoff point. I recommend reading the article to find out the answer, but for those not interested, I’ll post the explanation below.
(spoilers)
The answer is somewhat complicated, but I’ll try to simplify it. First, in 1934(!), an engineer, William Firth Wells, and his wife, Mildred Weeks Wells, a physician, “analyzed air samples and plotted a curve showing how the opposing forces of gravity and evaporation acted on respiratory particles.” Through this analysis they discovered that particles bigger than 100 microns sank within seconds, while particles smaller than that stayed in the air.
This would suggest that the cutoff should be 100 microns, not 5. Why did the public health community choose 5 microns instead of 100?
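The physics behind the Wellses’ curve can be approximated with Stokes’ law for the settling speed of a small sphere in still air. This is a back-of-envelope sketch of my own–it ignores evaporation and air currents, and it is only rough at the 100-micron end–but it shows why the two size classes behave so differently:

```python
def stokes_settling_seconds(diameter_m, fall_height_m=1.5):
    """Seconds for a small watery sphere to fall fall_height_m in still
    air at its Stokes terminal velocity: v = rho * g * d^2 / (18 * mu).
    A back-of-envelope model only: it ignores evaporation and air
    currents, and is only approximate near 100 microns."""
    rho = 1000.0  # kg/m^3, density of a respiratory droplet (~water)
    g = 9.81      # m/s^2
    mu = 1.8e-5   # Pa*s, dynamic viscosity of air
    v = rho * g * diameter_m ** 2 / (18 * mu)
    return fall_height_m / v

print(f"100-micron droplet falls 1.5 m in ~{stokes_settling_seconds(100e-6):.0f} s")
print(f"5-micron aerosol takes ~{stokes_settling_seconds(5e-6) / 60:.0f} min to settle")
```

A 100-micron droplet drops out of the air in seconds, while a 5-micron aerosol can linger for the better part of an hour–consistent with what the Wellses measured.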
There isn’t a definitive answer, but the scientists and engineers that worked with Marr provide a compelling hypothesis.
Next, two discoveries seem to matter. First, the mucus in human nostrils can effectively filter pathogenic particles larger than 5 microns–that is, smaller particles would not be filtered. Second, Wells also did some research on tuberculosis and discovered that tuberculosis particles of 5 microns or less could lead to infection.
Another factor might also be significant. One of the more prominent epidemiologists at the nascent CDC seemed to view the notion of airborne transmission “as retrograde, seeing in them a slide back toward an ancient, irrational terror of bad air—the “miasma theory” that had prevailed for centuries.” Instead, the public health community seemed to emphasize personal hygiene, like handwashing.
These factors may explain why the public health community focused on 5 microns while forgetting about the earlier study showing that particles up to 100 microns could float around in the air. (Also important: TB needs to go deep into the lungs to cause infection–the smaller and finer the particle, the better the chances of this happening. But other pathogens can do damage without going so deeply into the lungs.)
In a way the 5-micron cutoff makes sense, given that mucus works as a good filter for any particles larger than 5 microns. Because of this, public health’s ignoring of pathogens bigger than 5 microns but smaller than 100 is somewhat understandable. Nevertheless, if the article is correct, it’s not a small error, and it should be corrected.
The brain-as-a-computer metaphor has led to a lot of problems. That’s the claim of science journalist Annie Murphy Paul in her discussion on the Ezra Klein Show. The problem, and disappointment, is that Paul (and Klein) don’t find a satisfying alternative. (She suggests the brain is like a magpie, citing the magpie’s penchant for using all sorts of things to build its nest.) Moreover, the critique of the computer metaphor (and also the brain-as-a-muscle metaphor) doesn’t seem entirely valid, and her claims, taken together, seem muddled.
And yet I’m recommending this podcast. Why? Well, I do think she presents some interesting ideas, in spite of the complaints above. For example, she points out that the brain seems to function better outdoors and while the body moves–not just walking or running; she claims that moving one’s hands while thinking (or talking) can help the brain function better. This idea does seem compelling to me, as does the explanation–namely, that we evolved in the outdoors, so the brain is best suited for operating in that environment, while a person is moving and observing. On the other hand, sitting indoors for long periods and focusing on written language is very unnatural, and the brain may have to expend more energy to deal with this situation. My experiences align with this as well: I do think walking helps my thinking (although this applies indoors, too).
Paul and Klein also talk about the way the body picks up on information we’re not conscious of, and how, when we make decisions, we get signals from our body. (I do recall her saying “body” in this context, not the mind, which seems a bit odd. I can understand the mind picking up information subconsciously and then triggering a response in the body, like a “spider sense,” when we’re facing a decision. But the body itself picking up the information seems odd.)
Overall, I found this to be a stimulating and interesting discussion, one that was also relevant to children’s education.
When I think of really good, memorable articles, I usually think of ones where I learn something worthwhile. I don’t usually think of articles that are simply well-written and entertaining (although I don’t often come across such articles).

What I Learned From the Worst Novelist in the English Language, written by Barrett Swanson in the New Republic, is an exception. I like this primarily because of the writing and because it’s entertaining and amusing.

(Edit: I read this because of music historian/writer Ted Gioia’s list of best online essays.)
An Existential Crisis in Neuroscience. (This appeared on Ted Gioia’s best 2020 articles list. I haven’t finished reading it yet, but I’m putting it here so I don’t forget about it.)
When I asked if a completed connectome (Reid’s note: A connectome is a “complete wiring diagram of the brain”) would give us a full understanding of the brain, he didn’t pause in his answer. I got the feeling he had thought a great deal about this question on his own.
“I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”
—
The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?
—
It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.
I thought the following University of Chicago lecture was on writing in general, but it is actually a guide for academics to get published or win grants for research. I actually watched about an hour of this, but I began to lose interest, partly because the lecturer, Larry McEnerney, started to annoy me, which I’ll explain briefly later.
Drilling down and identifying specific problems would be interesting, but I don’t have the energy. I would, however, cite two general problems I had with McEnerney:
1. The very bottom-line lens through which he views and understands teachers, academics, and their writing and thoughts. For example, he says teachers read their students’ writing because “they’re paid to care what you think.” If I were a teacher, I’d find this highly offensive. Teachers have to be paid for this? They don’t care about their students’ learning and ability to express themselves unless they’re paid? Is this the way McEnerney feels? (By “bottom line” I mean this in the monetary, transactional sense.)
2. He sometimes specifies that the writing he has in mind is for a specific audience (i.e., academic communities or funding sources), with a specific purpose (i.e., to get published or receive funding), but at other times he seems to forget this and speaks as if that form of writing is “good,” in a more absolute sense, while traditional writing and reasons for it are “bad.”
Yeah, that would be annoying.

Academics have a weird relationship with language. I’ve said this a lot at work. I’ve said the same thing with “lawyers” replacing “academics.”

But if this thing is longer than half an hour, no way am I watching it! 🙂

It’s not about academics so much as writing in such a way as to increase one’s chances of being published or receiving a grant. McEnerney is actually criticizing the way most academics think about writing–how it’s an obstacle to getting published, etc.
As for watching the video, I didn’t think I would watch as much as I did. I found it interesting, at least for the first 50 minutes or so.
I liked the distinction between transmission and ritual, within the context of journalism, that Jay Rosen makes in the following Twitter thread. To summarize, transmission basically refers to getting information to people, whereas ritual has to do with creating “a shared world where we can dwell.” This often involves reaffirming certain values, facts, and worldviews while casting aside others. The effect can draw people together and, while Rosen never says this, cast others out.
By the way, this thread caught my attention because I’ve been reading Confucius’s Analects and also watched a lecture series on him. Confucius seemed to really value and emphasize rituals, but neither the Analects (so far) nor the lecture series has revealed the reasons for this.
My guess, supported by Rosen’s thread, is that rituals bring people together, reinforcing important values for the society–and, I would add, in a powerful symbolic and experiential way. My sense is that Confucius’s main concern is a harmonious, functioning society. This is the end point of all his teachings and beliefs. Rosen’s description of ritual (actually, the definitions are from James W. Carey, I think) is consistent with this.
I like Dust-to-digital on Twitter, as it features short videos of music, and not just from America.

This response kinda surprised me, but I admire him for it.

Also, if this is the case, imagine what his films would have been like if he had put art above all else! (One question that came to mind: Did Welles start putting friends into his films more frequently at a certain point, or did he always do this?)
I really liked this motion comic short (11 minutes) featuring Wolverine and Spider-Man, which is more of a character-driven piece. I’ve complained about Hugh Jackman’s depiction of Wolverine, but here’s an example where they get Logan right, in my view. The voice actors, their acting, and the dialogue are true to the characters as I’ve known them from the comics. Finally, the story is pretty cool, too.
“ChatGPT is a development on par with the printing press, electricity and even the wheel and fire.”
That’s according to Lawrence Summers, the former Treasury Secretary in the Obama administration. I had heard about ChatGPT before, but I knew nothing about it. (Actually, when I listened to Summers, it sounded like he was referring more broadly to the ability of AI to think and express itself like humans.)
Here’s what I learned from an NYT article.

In ChatGPT’s case, it read a lot. And, with some guidance from its creators, it learned how to write coherently — or, at least, statistically predict what good writing should look like.
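The “statistically predict” part can be illustrated with a toy bigram model. This is my own minimal sketch–actual systems like ChatGPT work with neural networks over subword tokens, not raw word counts–but the core move is the same:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which, then
# predict the most frequent follower. Real large language models are
# vastly more sophisticated, but the core move -- statistically
# predicting the next token from what came before -- is the same.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the bird ."
).split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
print(predict_next("sat"))  # always followed by "on" in this corpus
```

Scale the corpus up to much of the internet and make the predictor a neural network, and you get something that can “statistically predict what good writing should look like.”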
Some benefits:
It can help research and write essays and articles. ChatGPT can also help code programs, automating challenges that can normally take hours for people.
Another example comes from a different program, Consensus. This bot combs through up to millions of scientific papers to find the most relevant for a given search and share their major findings. A task that would take a journalist like me days or weeks is done in a couple minutes.
The benefits here are obvious, but, off the top of my head, here are some drawbacks:
For humans, the ability to comb through lots of information and find what’s most relevant could deteriorate.
My sense is that different people make different judgments about what is relevant; the ability to do this, which includes making connections with other information, including seemingly unrelated information, can differ significantly from person to person. Will this capability become more uniform if done by an AI?
My sense is that this process can lead to important insights. How will AI impact that?
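To make the worry concrete, here’s a bare-bones sketch of automated relevance ranking using word-count cosine similarity. The documents are made up, and tools like Consensus presumably use far richer methods; the point is that once one fixed scoring rule does the filtering for everyone, everyone gets the same notion of “relevant”:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Made-up stand-ins for paper abstracts.
docs = {
    "flu": "influenza spreads through airborne particles indoors",
    "diet": "dietary fiber improves gut health outcomes",
    "vent": "ventilation reduces airborne transmission in buildings",
}

def rank(query):
    """Document names sorted most-relevant-first under one fixed rule."""
    q = Counter(query.lower().split())
    scores = {name: cosine(q, Counter(text.split())) for name, text in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("airborne transmission ventilation"))  # ['vent', 'flu', 'diet']
```

A human researcher might rank these differently depending on what connections they’re chasing; the scoring rule ranks them the same way every time, for every user.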
In a survey, a group of scientists who work on machine learning had an even more dire response:
Nearly half said there was a 10 percent or greater chance that the outcome would be “extremely bad (e.g., human extinction).” These are people saying that their life’s work could destroy humanity.
This seems like a big problem, one that seems blatantly foolish:
The problem, as A.I. researchers acknowledge, is that no one fully understands how this technology works, making it difficult to control for all possible behaviors and risks. Yet it is already available for public use.
To go ahead with something that we don’t fully understand, but could pose an existential threat to humanity (albeit a relatively small probability) seems foolish. And how can we accurately assess the risk if we don’t fully understand how the technology works?
This Atlantic article–“The End of High School English”–written by a high school English teacher, has a grimmer assessment of the effects of GPT, at least as it pertains to writing in schools–and maybe writing in general.
In a nutshell, when students inevitably get access to GPT, they’ll be able to use it to create essays. Teachers won’t be able to know whether the student or GPT did the work.
I share the teacher’s grim view, and I’ll explain why by responding to a section of the piece:
Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?”
My knee-jerk reaction: It is absolutely worth doing. Not only that, it’s essential. To me, learning to write well is the same thing as learning to think well. Indeed, I’m not sure it’s possible to think well without being able to write well. (That’s probably going too far.)
Knowledge, understanding, and insight are things created. For an individual to develop substantive knowledge and understanding, the individual has to process and create it themselves. I don’t mean that each person has to create knowledge and understanding from a blank slate. Instead, they must digest and build up the knowledge in a way that makes sense to them.
To me, writing is the key tool for doing this. I have a hard time imagining anything that can adequately replace this.
Breakthrough in Fusion Nuclear Energy Reaction

In the decades scientists have been experimenting with fusion reactions, they had not until now been able to create one that produces more energy than it consumes.
I still don’t get how something can produce more energy than it consumes. I guess this would mean finding a way to significantly reduce the amount of energy required to smash the two atoms together–and the bulk of the energy output would come from the nuclear fusion itself?
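The resolution of my confusion, I think, is that “produces more than it consumes” is measured at the fuel target, not at the wall plug. Using the round numbers quoted later in this thread (3 MJ of fusion output, a 2 MJ laser pulse, over 200 MJ of electricity to drive the lasers):

```python
# Two different "gains" for the NIF shot, using the round numbers
# quoted later in this thread.
laser_on_target_mj = 2.0   # laser energy delivered to the fuel capsule
fusion_out_mj = 3.0        # fusion energy released by the fuel
wall_plug_mj = 200.0       # electricity drawn to fire the lasers (at least)

target_gain = fusion_out_mj / laser_on_target_mj  # the celebrated "net energy" result
plant_gain = fusion_out_mj / wall_plug_mj         # a power plant needs this above 1

print(f"gain at the fuel target: {target_gain:.2f}")  # 1.50
print(f"gain at the wall plug:   {plant_gain:.3f}")   # 0.015
```

So the fuel really did release more energy than was put into it, which is the scientific milestone; the overall machine still consumed vastly more than it produced.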
The development is exciting, but the obstacles do temper my enthusiasm:
Creating the net energy gain required engagement of one of the largest lasers in the world, and the resources needed to recreate the reaction on the scale required to make fusion practical for energy production are immense. More importantly, engineers have yet to develop machinery capable of affordably turning that reaction into electricity that can be practically deployed to the power grid.
Building devices that are large enough to create fusion power at scale, scientists say, would require materials that are extraordinarily difficult to produce. At the same time, the reaction creates neutrons that put a tremendous amount of stress on the equipment creating it, such that it can get destroyed in the process.
12/15/2023

This article from the Bulwark was helpful. It addressed or raised some of the questions I had. For example:
Many of the points raised by the killjoys were correct. While the NIF fusion fuel produced 3 megajoules (a MJ is about the energy needed to power a 100-watt light bulb for 16 minutes) of energy after being hit by a 2 MJ laser pulse, it took over 200 MJ of electricity to drive the lasers. So, from a practical point of view, no net energy was produced.
It also brought up or articulated challenges in a clearer way for me:
Furthermore, the NIF, a football-stadium-sized facility built at a cost $3.5 billion, does not remotely resemble a power plant. Despite its immense capital cost it includes no mechanism for translating its fusion output into electricity. Furthermore, even if its electricity consumption could be reduced to zero, at the yield demonstrated it would need to be fired a thousand times a second to generate energy at the rate of a major urban nuclear or fossil fuel electric power station. The best NIF can actually manage is about one shot per day.
But it also gave an easy-to-understand explanation as to why the announcement is still a big deal:
To understand the significance of the NIF experiment, imagine that you are a Stone Age human, living in a society whose only source of fire is from lightning strikes. You observe that if you rub two sticks together they get warm. So you hit on the idea of rubbing them really fast and hard in order to try to light a fire artificially. After many tries and much effort, you manage to light a dry leaf on fire. The energy the burning leaf releases is much less than what you put in with your muscle power. But now you have a way to produce fire on demand. By analogy, that is what was just accomplished at NIF.