
Sunday, December 19, 2010

light source physics

I've been writing recently about some frustration with the field of accelerator physics and where I fit into it. I came up with something of a tentative solution to this long-standing problem, which I want to say a little about here. It may look like just words, but it does represent a change of orientation, and perhaps will lead to a better fit between what I am interested in and can do well, and what I am doing and am asked to do for my job/career.

Outside of the question of how well research is supported, there has been an additional problem.
My thesis work was on electron storage rings and equilibrium electron distributions. But to continue with that sort of topic is typically defined as accelerator physics, or beam physics, or even machine physics. The problem is that none of these really excites me that much. I was basically interested in the classical mechanics, or the non-linear dynamics, or the statistical mechanics, but not in making particles go as fast as possible. Beam physics is more interesting to me, but if it is defined so narrowly and with such little research support, it's still not great. Machine physics also seems like a somewhat derogatory way of describing the topic. It doesn't describe what the physics is about, but only where it takes place. It is the stuff back there, beyond where the real science is happening, inside that big machine.

So I decided to define my own field of work/research as light source physics. This is meant both to exclude and to include. One can say that light source physics (for synchrotron light sources, anyway) relies on accelerator physics and beam physics. But one could also say there is some overlap with them. First of all, one needs to get the beam there in the first place. That's the accelerator physics (though of course it's much more: it's engineering, it's control systems, it's infrastructure...). Then one needs to know about the general behavior of relativistic beams of charged particles. This would be beam physics. Light source physics implies that the purpose of this electron beam is the radiation it produces. And furthermore, the dynamics of this beam is only half the story. The other half is the light that is produced. The electrons produce electromagnetic fields, or perhaps photons, or perhaps a distribution of light. I'd say that until this light exits the front end and heads down the beamline, we are in the realm of light source physics. The source of the light.
Thus both accelerator physics and other applications of beams are excluded. Colliding beams are used for particle physics. There are also medical purposes for beams. There is electron microscopy using electron beams.

Finally, whereas beam physics is shared between synchrotron light sources and colliders, on the radiation end we could say it's shared with X-ray optics particularly, and optics more generally. So flashlights and LEDs and the sun and fluorescent molecules are also light sources. And it's within the realm of these topics that the definitions of brightness, brilliance, flux, and all that have been developed. So it's not cheating to say that they are part of the field.

As for its need, one can look to a site such as Lightsources.org and find all about the applications, but not too much about synchrotron radiation, and even less about the electron beam. So, though unorthodox, it seems to me a gap that could use development, but with different emphasis and theory than comes to mind with accelerator physics or beam physics or machine physics. In particular, both single-pass and multipass machines are included. FELs can be included... for now, this is just my own personal definition, but I think it makes sense. Talman's "Accelerator X-Ray Sources" is, I think, a good reference to orient some of this.

Just briefly, then: the picture is going from an electron beam to a photon beam. The electron beam may be described with Twiss parameters, and with a more general coupling formalism. I believe one can describe the photon beam in the same way.
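To make this concrete, here's a minimal numerical sketch of how Twiss parameters turn into the second-moment matrix, and hence the rms size, of a beam in one transverse plane. All the Twiss values and the emittance below are made up for illustration:

```python
import numpy as np

# Hypothetical Twiss parameters for one transverse plane (illustrative only)
beta, alpha = 10.0, -0.5              # Twiss beta (m) and alpha
gamma = (1 + alpha**2) / beta         # completes the Twiss triplet
emittance = 5e-9                      # geometric emittance (m rad), assumed

# Second-moment (sigma) matrix of the beam; its determinant is emittance^2
sigma = emittance * np.array([[beta, -alpha],
                              [-alpha, gamma]])

rms_size = np.sqrt(sigma[0, 0])       # rms beam size = sqrt(emittance * beta)
print(rms_size)
```

The appeal of this kind of bookkeeping is that the same sigma-matrix machinery could, I believe, carry over to the photon beam.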

Anyway, this blog has been a somewhat strange mix of personal and professional stuff. Since I hope to see how well I can do as a light source physicist, I decided to create a separate blog called Light Source Physics. The present blog will stay a bit more personal and amateurish, venturing briefly into topics I know little about but find interesting. Perhaps the other will develop more substantially. Or perhaps by splitting into two, I'll lose interest in blogging on either one... we shall see...

Another idea is to focus more on the philosophical aspects of things here, and more on the technical on the other one. Light may be described with a Wigner function, for example. I'm curious what it means. It's supposed to be the closest thing to a representation of the distribution of photons, but it can go negative. Its interpretation is also difficult in quantum mechanics, so I'm curious about it with light. Is it a quantum mechanics issue? Is light transport a good context for thinking about basic non-relativistic quantum mechanics? Is symplecticity an important concept in light transport?
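To play with the negativity question concretely, one can compute the Wigner function of a one-dimensional field amplitude by brute-force quadrature. This is just a toy sketch (hbar set to 1, the grid and the wavefunctions invented for illustration), but it does show a Gaussian staying positive while a two-peak superposition goes negative in the interference region:

```python
import numpy as np

# W(x, p) = (1/2 pi) * Integral psi*(x + s/2) psi(x - s/2) exp(i p s) ds
ss = np.linspace(-12, 12, 2401)   # integration grid for s
ds = ss[1] - ss[0]

def wigner(psi, x, p):
    vals = np.conj(psi(x + ss / 2)) * psi(x - ss / 2) * np.exp(1j * p * ss)
    return vals.sum().real * ds / (2 * np.pi)

gauss = lambda x: np.exp(-x**2 / 2)          # single Gaussian mode
cat = lambda x: gauss(x - 3) + gauss(x + 3)  # two-peak superposition

print(wigner(gauss, 0.0, 0.0))      # positive (Gaussians never go negative)
print(wigner(cat, 0.0, np.pi / 6))  # negative: interference between the peaks
```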

The other thing to do with this blog is to continue with the mess I've been working on and describing related to accelerator physics, and beam physics. But hopefully continuing in a positive direction. Oriented towards getting a good code, and a reasonable set of references to help understand these things. It is a part of light source physics, after all.

Thursday, November 18, 2010

slow

Today I feel frustrated. There's a sense of crystallization. The job splits into two parts: a work part and a research part. Not exactly, and it's still a little vague, but it feels like it's heading towards this.
You create something (a possibility?) and then you live with it. But in the process you don't pursue other options, and I find a heaviness in taking on the option that I've created. I was talking to my dad a few days ago, describing some of my thoughts and efforts at cleaning up old messes and turning my field into something I can work with. He recommended a book he was reading about two brothers who can't throw anything away, who live in an apartment with everything they have ever collected.

Is this what I do? Simply refuse to move on? Keep working with an unworkable situation? In the language of the mess: is the mess simply too big? And even if it's not, will there be anything interesting left after the mess is clean?

I feel like it's all come to a stop. I reached a point of unstable equilibrium and just sit there, but it's a very gradual slope away from this point. There are things I can work on; they are somewhat useful but not urgent. There are halfway interesting research-like questions. But this is my own internal process. Does it match expectations and categorizations for achievement?

All this digital life continues to bother me (though I participate, such as with this blog). I continue to hear from the computer scientists about optimization and automated search and categorization. And one's activities on social networks become discrete. "So and so did this." "Now they did this." "Oh?" "Yes, they did do that." "Now so and so did this." "Really. That's great! So glad that you told me." Someone tells me about classification of human actions. They describe the "atoms" of action. Tom Waits smoking a cigarette and drinking coffee from a mug, the motion of hand away from mouth, "atomic".

Wednesday, November 10, 2010

work

Back to this theme of work vs. research, I guess I want to put a good word in for the difficulty of work. (Previously I said researchers may work harder.)

I'm trying to calculate something for a bunch of stored measurements. The formula is known, and it's programmed into a Matlab code. I really just have to get the parameters and plug them into the formula. What can be so hard about this?

Well, first, there is a fair amount of uncertainty in some of the parameters. It takes some work to cross check with various sources to get the parameter values reasonable. Then I need to learn how to access the data and to manage transfer of data and programs I write across a varied computing landscape. Finally, I need to choose which data to actually analyze such that the results will hopefully tell some kind of story out of which we can learn something.

A lot of these steps are the same in research. But even though there is a lot of uncertainty, in some ways there is more certainty, because there is a research program: a set of questions to be answered. In the case of almost-research, where the job is to understand something, keep it going, and maybe make a few improvements, it may be even more open-ended than research. Not to say I can't define some research projects within this, but there is a lot to be done that is really about gathering together data from disparate sources to understand and diagnose problems. And I wouldn't call this aspect research per se.

Tuesday, November 09, 2010

models

I enjoyed reading this paper today by Ronald Giere called "Representing with Physical Models".
It's an interesting thought that one can consider a graph or other representation of data, like a 3-D image, as a model in a similar sense to Legos for a model car, or Lincoln Logs for a model house.

In the process of working through some problem, I often want feedback at an early stage, so I produce some kind of plot that may partially get at what I want to say, or where I want to go, and I show it to a supervisor or someone else. It's always an interesting process to have someone else look at your plot and take it as it is. For me, it is a temporary representation of some data I've been playing around with, but for someone else, it becomes a self-contained object. They look at its boundary, ask about its imperfections, and describe the picture that it paints.

In the article, Giere describes representation via theoretical, physical, and computational models. His example of a theoretical model is a harmonic oscillator; his physical model example is Watson and Crick's colored balls representing DNA; and his computational model is a 3-D image of a protein based on theoretical calculations and some protein data. He wants to say that these are all basically doing the same thing: that together with a person to do the interpreting, each of these can be acted on in various ways to learn something about the real system.

I guess this makes some sense to me. The nice thing about a toy model of something is that you can play with it and get some feeling for it. You know harmonic oscillators have a fixed frequency; you can picture them oscillating in your mind, and you can even imagine the force they push back with as you try to compress the spring. Similarly, with the real balls representing DNA and the 3-D image, you can play with them and relate them to things you know in the world. So it is with a plot you produce. Its limits and its potentialities may come alive in the viewer's mind. It doesn't tell all, but it gives something concrete to hang on to, to start building a picture of whatever you're trying to understand.

In my last post I said that physics gives us a bunch of models which have been used to describe electron storage rings (the example I focus on because I work on this and want to clarify certain messy aspects of it). I think maybe some of the difficulty in this field is that computational approaches were developed, but somehow the last step of using them to make models didn't happen so well. One has a picture of a map with a resonance, but there's no good software to really turn this into a model one can play with and get a feel for. (I suppose frequency map analysis software may qualify in this sense: one gets colorful pictures in which the resonances show up in the tune diagrams.) The concepts are there, and the software has been written (e.g. FPP), but not many people know how to use it, or how it relates to the phenomena of storage ring maps. In this context, "model" has usually meant the elements going into the computer code, and I suppose that's the theoretical model. But with the incoming model being very complex (so it's hard to play around with in one's mind), and the software not being easy to use, visualize, and relate to familiar things, one is left without good conceptual tools to understand some of these phenomena.

Monday, November 08, 2010

system, environment

So physics provides us with interpretive models. We have ways of translating things into mathematical structures. Let's take this example of the electron storage ring.
We have magnets. These are big, heavy metallic objects with current running through them, shaped in ways to produce magnetic fields. So we line these up and put them in some configuration. Now, there's a certain region of space that maps out a doughnut-type shape inside all these magnets. Physics gives us the model of a magnetic field at all places inside this doughnut.
We have devices that mesh well with this picture. They measure the field, and we basically assume that at a given time the field has some value everywhere, and that one can repeat measurements and get the same value. Then we throw some matter in there. We interpret that matter in terms of point charges with various masses and charges, such as electrons or air molecules.

The magnets and magnetic field are the environment, or the background. The charges now move in this background. Depending on the needs, one can use different descriptions of the dynamics of the electrons. One can use classical electrodynamics to describe the motion of the charges, and the electric and magnetic fields they produce that may then also act back on those charges. I'm not entirely sure of the status of the self-force and consistency within classical E&M, but I think it's basically understood how to deal with it.

But actually, we need a little more than classical E&M. We need a bit of quantum mechanics. The radiation the electrons give off comes in lumps, and the lumpiness actually has an important effect that we can't ignore. Without the quantum lumpiness, for an appropriately set-up storage ring, the electrons would all end up at the center of the potential. Classical E&M says there is a damping mechanism that causes this to happen. Now, the interaction between electrons would limit the size of the resulting beam to a very small, finite size. But it turns out that the quantum lumpiness causes the beam to be much larger, and together with the damping mechanism, sets the size of the electron beam.

How do we treat the lumpiness? We use quantum mechanics (is it really full-blown QED? or some semiclassical approximation given the emission spectrum of the electron?) to provide the diffusion coefficient. This turns the Lorentz equation into a stochastic differential equation. In the case of linear dynamics and constant damping and diffusion, the result is a Gaussian probability distribution, which, when considered for an ensemble of electrons, results in an actual Gaussian charge distribution.
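The simplest cartoon of this damping-plus-diffusion story is a one-dimensional Ornstein-Uhlenbeck process, which can be integrated by Euler-Maruyama in a few lines. All the parameters below are arbitrary; the point is just that the equilibrium comes out Gaussian, with a variance set by the ratio of diffusion to damping:

```python
import numpy as np

# dx = -lam * x dt + sqrt(2 D) dW, integrated by Euler-Maruyama.
# Equilibrium distribution should be Gaussian with variance D / lam.
rng = np.random.default_rng(0)
lam, D = 1.0, 0.5                         # damping rate, diffusion coefficient
dt, n_steps, n_part = 1e-2, 2000, 20000   # total time = 20 damping times

x = np.zeros(n_part)                      # an "ensemble" of particles
for _ in range(n_steps):
    x += -lam * x * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_part)

print(x.var(), D / lam)                   # simulated vs. predicted variance
```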

Once the magnetic field has been set, and one is considering charged particles, there are other formulations for describing the classical dynamics besides the Lorentz force law. In particular, one can use Lagrangian or Hamiltonian mechanics. Let us take the lead of Michelotti in describing this framework. He begins with the pendulum to introduce model systems that have the properties the maps around the storage ring will have. He emphasizes with the pendulum that the phase space may not be R^n, but is a manifold. In chapter two he introduces linear and nonlinear models. He discusses the Hopf map and the Henon map, and gives the ideas of ergodicity and some other probability concepts such as partitions. So in general, we are actually in the realm of dynamical systems. And where does Michelotti end up? By chapter 5, he is discussing perturbation theories for Hamiltonian dynamics and tries to describe the Forest, Berz, Irwin normal form algorithm, which can accommodate isolated resonances.
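Since the Henon map comes up here, and tracking is what one falls back on anyway, here is a little sketch in that spirit: an area-preserving Henon-like map (a linear rotation plus a single sextupole-style kick; this particular form and the tune are my own arbitrary choices, not anything from Michelotti) tracked to get a crude dynamic aperture:

```python
import math

def henon_step(x, p, omega):
    # One turn: rotate in phase space by omega, with a quadratic kick.
    c, s = math.cos(omega), math.sin(omega)
    return x * c + (p - x * x) * s, -x * s + (p - x * x) * c

def survives(x0, omega=0.21 * 2 * math.pi, turns=1000, bound=10.0):
    x, p = x0, 0.0
    for _ in range(turns):
        x, p = henon_step(x, p, omega)
        if abs(x) > bound or abs(p) > bound:
            return False
    return True

# Crude "dynamic aperture": the largest launch amplitude that survives
amps = [a / 100 for a in range(1, 100)]
da = max((a for a in amps if survives(a)), default=0.0)
print(da)
```

No perturbation theory here: just iterate the map and see what stays bounded.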

So this is a long path stretching away from the magnets we see and the measured fields. It provides tools. A path to walk on. But there has to be a pulling from the other end. We have to know where we want to go. I would say that one usually wants to get to questions of stability. One wants to know whether a given bunch of electrons moving through this doughnut will last very long or not. And unfortunately, the elaborate normal form perturbation theories don't tell us this. The same goes for the numerical implementations of these perturbation theories. One can compute resonance strengths and tune shift with amplitude to arbitrary order, for a machine with all the appropriate misalignments and field errors and the full Hamiltonian, and one still doesn't answer the stability question by perturbation theory.

But these are nice paths, and the tools are good tools. Without some pulling from the other side, though, the use of these tools gets lost. One doesn't know that sometimes one needs to develop new tools, or maybe give up on full understanding and just track the particles and see what happens.
Between the end point of injection efficiency and Touschek lifetime (momentum aperture), and the beginning of magnets leading to particular paths through classical mechanics, with brief borrowings/harvestings from the quantum, one will simply get lost on these paths.

Friday, November 05, 2010

hot topics

I commented again (29) at Cosmic Variance on a post about "Physicalist Anti-Reductionism" which included a debate between John Dupré and Alex Rosenberg. Sean seems to minimize the importance of the topic, finding it "the most boring argument in all of philosophy of science."
To me, it gets back to the kind of split I experienced when reading Nancy Cartwright. I found it hard to do physics when I didn't have the grand picture of it in mind and instead took a skeptical approach. Can one be critical of something and excited about it at the same time?

But actually, to me, reading more skeptical philosophy of science is kind of like finding an honest way back to appreciating some of the stuff that originally excited me.
Maybe I'm just trying to justify choosing a not so "hot" topic in physics. Condensed matter theory, or particle theory or cosmology might have been sexier in some ways. Maybe I chose a purposefully boring topic because I thought it would be more honest.

Anyway, I was just realizing that this sense that a kind of reductionism is wrong has made me just not think very much about the components of things. Yes, there's a real sense in which we're made of molecules. And they are pretty cool. And there's a lot of them. And people make pretty pictures of them. And understanding a mechanism is pretty exciting.

The basic problem I have with accelerator physics is that, try as I may, I can't put it in the same bag of exciting stuff as a lot of other topics I've seen before. Thinking about protein structure, or photosynthesis, or quantum mechanics is fun for me. But thinking about dispersion functions and chromaticity and tune shift with amplitude and momentum compaction factors... is just hard to get excited about. There were topics that originally seemed exciting. There's basically a new approach to classical mechanics developed in early accelerator theory: a Lie algebra approach. Then there's the stuff with power series, which an early advocate describes in terms of differential algebras with connections to non-standard analysis. But in some sense these mathematical abstractions are a bit overblown (particularly the latter). The reason I say they are overblown is that the problem has not even been solved. The real non-linear dynamics problem is that of the dynamic aperture (the stable region of a non-linear map), and as far as I know, this isn't really a solved problem. So going so far out into a given formalism when that formalism doesn't even solve the main problem seems a little too much.

Anyway, I'm not giving up. I like the classical mechanics. Synchrotron radiation is something I can put in the bag of exciting stuff. And the awful messy code situation may slowly improve. So that's sort of the package. We've got some kind of nice classical mechanics. A bunch of somewhat useful definitions of things that are measured. A bit of a computer code and sociological infrastructure difficulty, and then some cool stuff with synchrotron radiation. It's a topic. It may be more fun to think about ecology, or species of mosses, or the definition and validity of reductionism. But at least the topic is becoming less awful. Less ugly. Back away from all the extremists with their unfinished pyramids to build, and one has a topic in need of some sprucing up and simplification, but honorable nonetheless.

Wednesday, November 03, 2010

advanced light sensing

A funny quote from p. 5 of the book "Elements of Synchrotron Light" by G. Margaritondo:
As new-born babies, we begin to learn by 'seeing' things with light, which consists of electromagnetic waves. As we grow up and become more sophisticated, we can use different types of electromagnetic waves to explore different properties of the world around us: for example infrared light to study atomic-level vibrations or X-rays to study the atomic structure of molecules.
Now, I'm always looking for how to "tell the story" of synchrotron light sources. But this is an odd angle! We start our lives by seeing with visible light, and then after we become more mature (as synchrotron light experimenters), we add infrared and X-rays to the spectrum of usable light for learning about the world!!

Saturday, October 30, 2010

some generality of science

I've been on a critical tack for a while, and am looking for a more constructive direction.
I'd like to get back to some of the excitement I've had for science. And it really covers a lot of good stuff.

So what are some of the important things we know? I think Feynman said that the most astonishing and important knowledge we have is about atoms. Here is something like universal knowledge. We can take anything we find, anywhere, and if we break it down in a variety of ways, we find that there are atoms that made it up. Maybe often you get molecules instead of atoms. But there is this idea of breaking material apart and always getting something from a fixed set of elements. This does seem to be pretty significant knowledge, and to always be true. Does this mean that atoms are the "underlying story" of everything? Maybe.

So this seems to be a kind of reductionism defined operationally. Stuff can always be broken down into atoms. (How to make it more precise? Step 1. Find something. Step 2. break off a little piece. Step 3. break off a little piece of that. Step 4. heat it up? explode it? ...)

What else do we have? We have light. There are radio waves, and visible light, and ultraviolet waves, and X-rays. You can't really talk about light in the same way you talk about matter. You don't break stuff apart and find light at the bottom. Stuff goes through transitions and light is emitted; it makes other stuff go through transitions.

I really don't want to end up with a network here, with nodes and messages being passed between the nodes. Yuck. I'm sick of networks.

Anyway, yeah, there's stuff, and there's light.

What about a beautiful forest- an intricate ecosystem? Stuff and light? Does that get us very far in understanding and appreciating it? More work for another day.

Wednesday, October 27, 2010

research infrastructure

In some ways, people doing research may do more work than others. You take the problem with you all the time, and you really put a lot of yourself into solving it. The benefit to this may be that you have freedom to pursue something that really interests you, and your work and passion may be aligned.

There's a danger when a field does not have a strong research culture but still has a research component. If most of the work to be done really isn't research, then what is and isn't research may get confused. A person programming for a company doesn't think of themselves as doing research, but rather as problem solving. The difference is that much of the framework is predetermined.

In a field without a strong research infrastructure, one is expected to make the framework oneself, but one will never really do anything new, because the information is just badly managed: the problems have already been solved long before. If one is supposed to be doing research, but is really just catching up to where others have already been, or cleaning up old messes, this is not very healthy. Instead, this component should simply be called work, a set of objectives should be set up, and the work divided amongst those doing it.
(added...)
Summary: if it's work and not research, then there need to be very clear goals and it should be finished, even if imperfectly. Depth and creativity and perfection are not well spent on something that cannot support real innovation. Real innovation will look bad at first, will take a while to get somewhere, and may need other people to finish things at a later time. I need to learn to separate research from work. I seem never to quite learn this lesson, and it gets me again and again.

Sunday, October 10, 2010

beam distribution, lifetime, synchrotron radiation

Ok, closer to what I should actually be working on...
We have an electron beam with a variety of interactions. At high energy in a storage ring, you mainly get a Gaussian distribution due to damping and diffusion processes from synchrotron radiation. The self-interaction mainly manifests as a beam lifetime: scattered particles are lost in what's known as the Touschek lifetime. Also, if there's an aperture close enough to this Gaussian, then particles are lost through the diffusion process in what's known as the quantum lifetime.
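For the quantum lifetime, there is a standard rough estimate (I believe due to Sands' storage ring notes) showing the exponential sensitivity to the aperture. A quick sketch, with an invented damping time:

```python
import math

def quantum_lifetime(tau_damp, n_sigma):
    # Rough textbook estimate: tau_q ~ (tau_damp / 2) * exp(xi) / xi,
    # with xi = (A / sigma)^2 / 2 for an aperture at n_sigma beam sigmas.
    # Only meaningful for xi >> 1; numbers here are illustrative.
    xi = n_sigma**2 / 2
    return tau_damp / 2 * math.exp(xi) / xi

tau_damp = 10e-3   # assumed 10 ms damping time
for n in (5, 6, 7):
    print(n, quantum_lifetime(tau_damp, n) / 3600, "hours")
```

Moving the aperture by one beam sigma changes the lifetime by orders of magnitude, which is why how Gaussian the distribution really is out in the tails actually matters.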

Suppose we have a non-Gaussian beam. How did it get this way? What does this say about the lifetime?

We can treat the synchrotron radiation effect on the distribution via the Fokker-Planck equation. Can other noise processes on the beam also be treated via the FP equation? What would a non-Gaussian distribution do to the emitted synchrotron radiation?

Finally, does anyone care? We use this radiation for all sorts of experiments. Which experiments care about lifetime? Which care about beam size? Which care about the coherence of the radiation?

phil sci, dirac

From Taking up Spacetime there is a link to the philosophy of science preprint server, here.
I've been browsing a bit and enjoying reading some papers. I found this paper on interpretations of the Dirac equation here (M. Valenti, 2008), which I've skimmed a bit. He talks about using QED to describe the hydrogen atom, which I'd be interested to understand better. It does seem that QFT mostly calculates S-matrix type stuff, and bound states are more foreign. If NR QM and the Dirac equation really come out of QED, then it should be able to deal with bound states. I vaguely remember something about "resonances" (related to complex poles) of the S-matrix being the bound states...
Maybe too hard to understand right now, but interesting stuff anyway.

(added... Ok, here's an interesting quote related to this reductionism, model building stuff:
In this way we are not restricted by Haag’s theorem – and so we can retain the concept of quanta in the description of interactions – because, from a physical point of view, the Lagrangian of quantum electrodynamics does not provide us (contrary to what from a mathematical abstract point of view might appear) with the possibility of describing a system of (undifferentiated) interacting Dirac and Maxwell fields, but with a way of developing models that describe in a limited way the interaction between the fields.
Valenti, p. 14)

Friday, October 08, 2010

inner space

I was complaining a little while ago that with all this network stuff (blogs, hypertext, RSS, info overload, etc.) the challenging thing may be to relearn to read. I've certainly noticed that I haven't been reading novels in depth in a while, but this probably has multiple causes.

Anyway, it seems that in order to be able to really take something in, we need to spend time on it and not be distracted. And we need to have the space for it. We need to be relaxed and not overloaded. This sounds like conventional wisdom, but it can be easy to forget. We can see the effects of reading more, but sometimes forget the value of reading less, more carefully. Allowing the words, sentences, ideas to sit there for a while, to connect up to each other, to try to see what bigger picture the author was constructing, and to relate the text to our own experience and ideas. This is an internal process that can't be sped up by skimming, or by reading more background information and researching every last connection. I know some people who analyze texts by making word clouds, finding out which words or phrases are most prevalent, and who want to try to extract understanding from such an approach. I imagine you can get more sophisticated and start writing programs to understand things for you, and just give you some kind of summary.
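(For what it's worth, the word-cloud style of analysis really is just frequency counting; it takes only a few lines, which is maybe part of why I doubt it can substitute for actual reading. The sample sentence below is just for illustration.)

```python
from collections import Counter
import re

text = ("Allowing the words, sentences, ideas to sit there for a while, "
        "to connect up to each other, to try to see what bigger picture "
        "the author was constructing.")
words = re.findall(r"[a-z]+", text.lower())
print(Counter(words).most_common(3))   # the most frequent words win the cloud
```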

It reminds me of tools in math and physics such as calculators or programming languages, compilers, etc. Here we let the machine do the work for us. Is there a benefit to doing the work ourselves? I know that I learned calculus by doing many, many integrals, on paper, learning the ins and outs of various tricks. Doing this builds up intuition and a whole workshop that connects to your other thoughts. You get a lot more out of it than just the ability to solve a specific problem. I think, in addition, you get a sense of which problems might be solvable, and of ways in which definitions might be shifted to get useful results.
You become richer, and it's more fun.

Anyway, since language and ideas are important to me and the source of lots of enjoyment, it's sad to see ways in which some kinds of thought may be bypassed. My general approach is to try to make sure I can do something by myself before I get a tool that can do it better and faster.
I do use Mathematica to test out whether an integral may have an analytical solution, but I feel good that I can (or at least used to be able to!) do it myself. (Here's a page of an accelerator physicist with some quotes on this topic.)
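One could do the same check with open-source tools; sympy stands in fine for Mathematica here. (The integrals are classic textbook ones, not anything from my work.)

```python
import sympy as sp

# Ask the CAS whether an integral has a closed form, as described above.
x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # Gaussian integral
print(sp.integrate(sp.sin(x) / x, (x, 0, sp.oo)))        # Dirichlet integral
```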

I want to be able to read in depth, think in depth, and calculate in depth, and to take my time with it, even if someone or thing can reach the same conclusions faster or more reliably. The inner world involved, and the benefit to me as a person to this seems incalculably valuable.

Wednesday, October 06, 2010

positive

I'm still reading Nancy Cartwright sometimes, and my tendency is towards criticism and thinking about why things are not as big a deal as people say they are.
But this isn't so great a way to do things or live life, at some point. Actually, she writes in one of her books that people tell her her project is not very inspiring. But I think she does have a positive project, along with the criticism.
Anyway, for me, I do like physics a lot. Better to find some real questions to work on and try to make progress. The critical approach can always be there, but if it's all there is, it's not so productive, and probably leads to mistakes too...

Wednesday, September 29, 2010

seriously, completely understood

Sean presses forward with "Seriously, the Laws Underlying the Physics of Everyday Life Really Are Completely Understood".

I would put an emphasis on the word physics here. I think there's an interesting question about the definition of physics. People who think of themselves as physicists would of course like to define it so that it is as powerful as possible. Reading Lubos a while back (finally it got to be too much, and I found reading him, and the few attempts to engage in the comments, just too upsetting), one could see this activity at play in the realm of string theory. String theory was to be the best, most powerful theory out there. It didn't matter that it wasn't well defined, or maybe covered a variety of topics. Future work would go into its definition and elaboration. What was important was that it was known ahead of time that it was all-powerful.

Clearly the standard model and general relativity are powerful frameworks that are extremely fruitful for building models to describe and predict the world. And there do seem to be some facts about how we can take anything we find and break it apart and find the same underlying stuff. And if we put that stuff into an accelerator or in various configurations, we can predict what it does.

But if the real point here is to emphasize the power and generality of the standard model and general relativity, then why talk about the "physics" of everyday life? This is the fundamentalism that Nancy Cartwright fights against. An effort to put disparate activities and types of argumentation together into one whole and say that it somehow covers everything.

Of course, I'm also sympathetic to this view of physics (or perhaps one can generalize to science as well) as a unified, extremely powerful discipline, and it was the faith that pulled me through graduate school. One thing that was discouraging for me (to continue a thread from Wimsatt's description of finding out how important "jerk" was) was learning about quantum field theory and renormalization. QFT was presented as a generalization of the non-relativistic quantum mechanics we'd learned. But then it was shown that one actually got wrong answers and had to patch things up with this method called renormalization. If this was the fundamental theory, and it still required this much tinkering to get results for particle physics experiments, it seemed plausible that it might require different tinkering to apply it correctly to limiting cases such as a helium atom. In some sense I hope I'm wrong about this, but it certainly was never presented in a coherent way. Foundations of QFT don't seem to be too popular, though, or seen as really open topics.

update... now the final (?) installment: one last stab.
I added a comment about how the reduction of helium to the standard model isn't usually done in a chemistry class, and it actually seems pretty hard. We're still trying to get protons out of QCD, with lattice QCD. I just wonder how much theories change as they pass from one discipline to another.

Thursday, September 23, 2010

All figured out

Here is Sean at Cosmic Variance patting ourselves on the back for having all the physics underlying our everyday life figured out. I know what he means, and in some sense it's a good point. I've tried to make this point to people before: that the laws of physics that we know explain everything. Everything. It's a good point not to nitpick about the mass of the Higgs or the existence of supersymmetry, or an accurate framework for quantum gravity.

At the same time, here is the reductionist mentality writ large. I'd like to see more debate and clarification on this point. Is there a way to make this point that is not so arrogant and overblown? It's certainly an accomplishment, and there's certainly a precise statement to be made, but how to make it without claiming too much, or minimizing the work and value and richness involved in "the playing out" of these laws?

Wednesday, September 15, 2010

organization

I've never been very good at organizing my life systematically. Somehow things seem to work, but I am also always left with a feeling that there are so many loose ends. Part of me wants to keep pushing to find a system, and part of me says- "look, what you are doing seems to work, so don't worry about it." I've heard the French described in this way, and I've seen it myself. There is a big mess of redundancy, it looks like a disaster, but somehow when you really need something, it tends to actually be there and work, somehow.

For example, should I make a To Do list? Where do I put it? Do I write it on a piece of paper and carry this everywhere? Or put it up on my wall? If I carry it, then I may always be worried about what I have to do, and never relax. If I leave it at home, I may not have it when I most need it. Do I write it in a file or using a program on my laptop? Then, when my laptop fails, all may be lost. Do I use some networked site like Google or some other service to manage my data? Firstly I need to learn their system. Secondly I am then dependent on a large company that may not have my interests at heart. And these companies are getting too powerful, anyway. Do I really want to be a part of that system? Of course I use their services sometimes, but do I really want to make this the sole point of contact and system for my data?

So what about a more varied system? Where the same information is written on paper, in my email box, on my laptop calendar program, and perhaps sometimes in a Google calendar or some other network-type application. Having a system like this takes some work, and I think needs to grow organically in some sense. It needs to be robust. (Yes, still processing some of Wimsatt's concepts.) I think somehow I never developed such a system, and any attempt to produce one too quickly runs into some of the problems I've mentioned. When life is too busy, any flaws in my system are made worse. I try to write things down, and the proliferation of paper is worse than the organizational benefit from doing it.

Maybe it really is time to buy one of these organizer things.

Wednesday, August 18, 2010

what we are afraid of

So I've been reading W. Wimsatt's "Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality". It has a lot of material on reductionism, which is quite wonderful, since it's always been a topic that fascinates me and scares me. He provides tools to get around the various overly crude reductionisms- the "nothing but"isms. One challenge is that it's rather focused on philosophy of biology, and I don't know this literature so well. But I'm still appreciating it quite a bit. In particular, Dawkins' reduction of all natural selection to the level of the gene ("the selfish gene") is something that scared me when I read it, and I would like to find more articulated criticisms within the group selection literature.

It occurred to me that extreme reductions may be behind a variety of fears we have. We are afraid of being too machine-like. We are afraid of being too computer-like. We are afraid of being too tool-like. We are afraid of being too money focused. We somehow know that to view all as "mechanism", or all as "computation" or "communication" or "economics" is a kind of simplification that will make many things we value rather hard to articulate. (A friend was recently telling me about his "all is optimization" theory of life, which of course is the very same type of beast.) So the fear is perhaps our reminder to ourselves not to take the (maybe useful) ideology too seriously- a nagging suspicion that everything we value is being defined out of existence.

This occurred to me as I was thinking about my own life and how to form patterns that are positive (a self-help/therapeutic approach to life). I was realizing that I have a deep antipathy to planning things too much. "It makes me into a machine." I say. Or perhaps, "it makes me into a computer."

To say much more, I'd either have to be more explicit about myself, or try to be more precise about these philosophical arguments and societal forces. But I'll stay in this gray zone where hopefully I've still said something useful. (I know it's a bad habit, staying in this gray zone, and I ought to start moving out.)

Tuesday, July 13, 2010

language

To have a nicely working system, people should know the names of things. And computers and programs make this even more difficult since they are so inflexible when it comes to names.
So put oneself in a multi-lingual environment with disparate computer codes, and naming of things becomes a difficult job.
I'm trying to get my own mess sorted out. I have directories with names that classify.
I have files with different extensions implying the programs that can read them.
And I have different projects I am working on. I keep redefining the projects, so I keep renaming them, and there is overlap between the different projects. And I seem to have multiple copies of the same files in different locations.
Basically, there are settings for devices, there is a corresponding parameter in a model, and then there are measured and calculated quantities for these different settings.
Sometimes settings are named based on the day they were used to link them together with other parameters for those days. Sometimes they are named with something like the word 'nominal' to try to push these into standard settings.
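To make the two conventions concrete, here is a small sketch (the file names and function are hypothetical, just illustrating the scheme described above):

```python
# Hypothetical sketch of the two naming conventions: tag a settings
# file by the date it was used (to link it with other measurements
# from that day), or by a label like 'nominal' to promote it to a
# standard configuration.
from datetime import date

def settings_filename(tag):
    """tag is either a datetime.date or a label string like 'nominal'."""
    if isinstance(tag, date):
        return f"settings_{tag.isoformat()}.txt"
    return f"settings_{tag}.txt"

print(settings_filename(date(2010, 7, 13)))  # settings_2010-07-13.txt
print(settings_filename("nominal"))          # settings_nominal.txt
```

Even a tiny convention like this at least makes the ambiguity explicit: a date tells you provenance, a label tells you intent, and the trouble starts when one file is silently playing both roles.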
Now, I might think I am obsessive about this sort of thing except for the fact that I really have not yet created a working system. So it's really just unfinished work that I'm having trouble moving forward on.

A final point: there are two aspects to this kind of work. First there is the clarification work- work to try to make language that works well for the different people involved. Then there is actually getting stuff done. For the latter purpose, there will inevitably be arbitrary choices made. The area is complicated enough that it's not always clear what the right thing to do is. The point is that you should know what you did. So if someone asks you, you can tell them, and they can repeat it (or you can!) if necessary. And depending on the kinds of people you are working with, you may also need to have answers ready to defend the arbitrary choices you made- or at least a strategy so that you could easily try something else if you can't defend a choice.

There is a conflict between doing something definite and coming up with an appropriate language to describe what you are doing. Doing something definite will often force you to make choices about language before you are ready to do so. Walking the line between these two is the key to moving forward while also bringing others along with you, so that it helps the total understanding and the group good as well.

Thursday, June 24, 2010

tolerance

I'm trying to do some detailed work. I have to create a bunch of files with settings of currents and magnet strengths, and then load these values into a model of the machine and calculate quantities based on tracking particles through the machine.
I'm realizing that I have an approach where I assume that in anything I do, I might make some mistakes. If I focus really hard, I can make fewer mistakes, but this is exhausting, and it's hard to think creatively while doing this. So I try to set up my systems so that they are tolerant of mistakes. This means that there are checks later on, and reviews where I can catch the mistakes I make.
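A minimal sketch of what I mean by checks, with a made-up file format and made-up magnet limits (none of this is the actual machine software):

```python
# Load 'name value' settings (e.g. magnet currents) and then run
# automated sanity checks, so that mistakes get caught by the system
# rather than by exhausting concentration up front.

def load_settings(path):
    """Read simple 'name value' pairs from a settings file."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            name, value = line.split()
            settings[name] = float(value)
    return settings

def check_settings(settings, limits):
    """Collect all problems as a list instead of failing on the first."""
    problems = []
    for name, (lo, hi) in limits.items():
        if name not in settings:
            problems.append(f"missing setting: {name}")
        elif not lo <= settings[name] <= hi:
            problems.append(f"{name}={settings[name]} outside [{lo}, {hi}]")
    return problems
```

The details don't matter; the shape of the workflow does. The checks are the part of the system that tolerates mistakes, so I don't have to.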
I think that other people don't work this way with this kind of work. They are more careful, they don't make many mistakes, and once things are checked, they leave things as is for fear of messing it up.

This to me is another aspect of "the mess". It's an area where, in order to keep something from becoming a mess, I have to work much harder than I normally would. If it were not a mess, and I had no problems with the systems in place, then I could just work in my concentrated mode, not make mistakes (or only very rarely), and get a lot more done. Since I allow myself to make mistakes, I have to change the system itself to accommodate this, or else face the consequences of making mistakes and looking bad as a result.

Tuesday, May 18, 2010

relearning to read

The institute for the future of the book blog discusses a slow viewing of a Herzog film
and states
The problem of availability is something that seems increasingly to have been solved. To view or to read well is another kind of problem. In the past, when there was an economy based on scarcity, this might not have been as much of an issue: whatever was available was watched or read. Now we need to think about how we want to watch: we need to become better readers.
I've been chewing on this for a while. This morning it struck me that perhaps this is a much harder task than we may think. And perhaps the technical challenges associated with bringing about the new mode of availability are trivial compared to the human and social challenges of reclaiming the same depth we had, and perhaps developing it in new directions. I, for one, have certainly become worse at in-depth reading in recent years. Perhaps there will be long term benefits. But I'm most often struck by a sense of loss.

To me, there are two separate questions. The first is about the nature of hypertext, and what such a literature might mean. My gut feeling is to reject it, and somehow it feels like this is related to the integrity of personal identity, and the linearity of time... (probably much simpler reasons are available)... were there ever choose-your-own-adventure stories that reached the level of high literature?

The second question is about how to read single texts. Do we take notes? Do we spend hours at a time, or minutes? Do we always read linearly, or sometimes skip ahead? I don't know why these questions should even be asked, except that with internet reading I've taken to all these habits, and perhaps they should be clarified if not rejected on an individual basis. I think for the most part, these are bad habits, akin to seeking out CliffsNotes for a book and not actually reading it. The danger of mistaking the map for the territory?

One final point is that there's a small flaw in the claim that the availability problem has been solved. What this means is that digitally, texts are more and more easily available. However, at this point, I really don't feel like e-readers are good enough. Maybe I just haven't given them a chance. But in any case, the truth is that if one prefers to read a printed text in a convenient book form, then the digital existence on one's devices is not the same. And independent book stores provide a filtering process that forms a community and provides these books for immediate purchase. I see the new system as an alternative, but calling it the "availability" problem hides the changes that have already taken place and will continue to take place with respect to what books mean, how they get to us, and the connection to the author.

I guess I just really like libraries and bookstores, and wonder what will happen to these institutions and how their social role will develop. Probably much has been written on this, but I think it's easy to think that the technical part of digitally distributing text has somehow done away with a large amount of preexisting objects, culture, and ways of thinking about things.

Sunday, March 21, 2010

messes

How do we deal with being stuck in a messy situation?

The nature of a mess is that you put energy into fixing it and it is still a mess. Perhaps one can slowly turn a mess into not a mess, but during the process it will be a mess. The key is to not look for completion, and to make sure one has other resources and interests. Some kinds of life projects give regular rewards, and have regular moments of clarity and transformation. A mess on the other hand, is a constant drain. Again, the work may still be valuable, and in the end, something good may come of it, but for large amounts of time, no such rewards are there, and not being draining may be the best possible scenario.

When one is involved in such a project, it is rather frustrating, because one is often asked if one is passionate about it, and if one loves it. But really, all one can say is that one is trying to improve a mess. One can barely even talk intelligibly about it, because the nature of the mess is that it cannot be clearly defined, and in fact incoherent approaches to characterizing the issues may abound. Thus, not talking about it may be the clearest and most honest approach available, but this leaves one in a state of mystery where one is exhausted but cannot say why. Historical and psychological analyses of those involved may also help, but without grounding in other, healthier areas (such that it's essentially just an approach of humor/compassion), one may again be led to worsening the problem rather than improving it.

Living through a mess is difficult because one's faster/more direct analytical faculties are led astray. The one thing that one may do is to constantly remind oneself that one is involved in a mess, think of somewhat healthier situations, and not push too hard. Put energy into it, notice that it is still a mess, and then recover from that effort, having hopefully pushed things along incrementally so that they will be a bit better the next time you return.

Friday, March 12, 2010

slow processes

I think that watching computer technology develop sometimes is disheartening for us humans.
We watch things go faster and faster, and see any process which once took an hour and a person to help with it, now take a single function call in a high level interpreted programming language, and a few seconds of processor time.

Seeing this, and looking at ourselves and our own development, it's hard not to feel impatient with ourselves. Some things take years and lots of work, and the progress is still only partial. Things like developing friendships, coming to terms with our past, and finding an appropriate way of life, profession, etc. How do we keep our dignity and allow ourselves our slowness when so many things seem to be getting faster and faster? I think there is some flaw in thinking that devalues something for its slowness. When we have a beautiful tree in the yard providing shade for years, do we put this tree down because it took so long to grow? No, in some sense the time it took to grow adds to its value.

So what is the difference between a tree and a Fourier transform algorithm? I don't think many people would wish that their basic programming tools could run a little slower. We like responsiveness.

How do we defend slowness? Perhaps the point is again back to the question of dominant natural and technological language. This computer age asks us to put all things in terms of algorithms and processes. But our own lives are mysteries in some ways. When we translate them into computer language, we have inevitably left things of value out, in the same way that when we translate into the language of economics, we also leave things out. It's certainly worth the effort to try to understand why some things are slow, and all of the different things involved in the process from an algorithmic perspective, but one should also just accept that the translation is only partial, and that slow things of great value exist.

Thursday, March 11, 2010

separation of duties

There have been various times in my life when I've felt "smart". Sometimes this goes along with some context in which others think I understand what is going on. When I realize that in some context I am considered "smart", my usual response is something of the sort: "wow, but I am so confused! I know so little!" Now, one might put this down as modesty, and say, no, in fact I know quite a bit, and such and such. But to me this dynamic seems to expose something of the structure of knowledge in society (as I have encountered it). There are people who feel they are not so smart and that others have most things figured out, and there are those who realize they don't know very much, but somehow represent structured knowledge to the others. To me this feels like a hoax that doesn't serve either person very well.

Now, this is a caricature, and I wouldn't want to say that it says much about the actual validity of knowledge. I only point out that this is a dynamic that may complicate things when evaluating how much ground a particular area of study actually covers. Consider perhaps a set of different areas of study, each overlapping with the other in some way, and as a whole covering a large amount of ground. Suppose further that the people studying each area take the subject matter in the surrounding areas to be more solid than those actually studying them do themselves (though they may be less forthright about this aspect than they should be). How do we evaluate the total ground covered by the overlap of these different disciplines? I suppose, we need to try to gain a bit of expertise in each of the areas and ignore some of the sociology and build up our own picture of the total ground covered.

Friday, February 26, 2010

distributed consciousness

Ok, I was getting worried before about the internet and distributed consciousness.
Nice to read from Crispin about collective consciousness. Here and here.
Yes, I like to think that if there are thoughts, that those thoughts belong to someone.
In any case, I am here. I interact with many others through this funny medium, but my consciousness is associated with my body, here and now.

Tuesday, February 16, 2010

communications theory

I recently read this article by William Deresiewicz on the history of friendship, and how social networking is changing the definition. I guess it's about ideas from one area of technology development coming into another. In this case, communications theory is being applied to human interaction. So we have nodes that are sending messages to other nodes through channels. Those channels have various properties, such as (?) latency and bandwidth. Thus, we can communicate with each other over the phone, through email, through Facebook, Twitter, Skype, and other messaging services. Now, from our perspective, we are living our lives, and these are modes of communicating with others. From the system designer's perspective, we are nodes trying to communicate with each other. Face-to-face contact through light and sound that travel through the air becomes another channel. It is prized for its "high bandwidth".

Like each such system, the difficult part from a human perspective is that one is put on the defensive. One may be required to put one's values into this new language in order to defend them. I was complaining about Facebook recently, and how many of your actions become publicly available. The response of the person I was speaking to was "don't you know about privacy settings?" There was a dismissive attitude to this. It was my job to understand the way in which Facebook had designed the system, and to address my concerns within that system. Rather than keeping things as they were, Facebook allowed more information to be shared publicly than was previously being done, and then put the burden on the user to figure out its system to reinstate those values. It's a similar situation with environmental concerns and economics. Within the domain of economics, those who don't think that species should be wiped out, or forests destroyed, or rivers polluted, must phrase these goods within the language of economics.

Let me add one more point. In his article Deresiewicz discusses a Facebook friend who tells everyone that they are at Central Park, and Deresiewicz asks why this person felt the need to share this. The thing about these new technologies is that they make this sharing just incredibly easy. The amount of effort to share such a thought is minimal. So, if in the end one decides that such sharing is actually detrimental to one's relationships, then one must see these technologies as rather dangerous. Or, at least, as something that requires a new kind of thought and understanding. A new kind of ledge one may fall off if one isn't careful. The new tools encourage us to externalize our internal worlds. This has much potential, and can improve self expression. But one can also give away important things, and not get much back in return. So, I note the dangers. And hopefully this serves as a reminder to those developing this technology to use humility, and not expect everyone to fit into their system.

Monday, February 15, 2010

Computer science is the new physics?

I've been meeting a lot of people doing research in various topics in either computer science, machine learning, or various somewhat related applied math topics. I was discussing pure vs. applied research with someone, and they were telling me that it was a very good time to be doing relatively abstract research in computer science. It was perceived that any kind of result, no matter how abstract, can have practical benefit in a relatively short amount of time.

This reminds me of how I imagine it to have been for physics following the creation of the atomic bomb. The physicists were seen as miracle workers. Give these guys some money, and they will do magic with it. It seems to me like we are coming to the end of this. Physics has lost some of its sway on the popular imagination. Observing the public perception of the LHC, for example, it's hard not to see the last gasps of this former power. I don't think the physicists are entirely without culpability in the impression that the LHC may create black holes to swallow up the world. Although not actively pushed, I think it is also not actively discouraged. The problem with this, and I think the whole unification of everything via string theory falls along the same lines, is that no real result can live up to this hype. No matter what happens, it will be a letdown to the public.

On the computer science front, I suppose there may be fruitful years of research ahead. What personally scares me about this is that the research is about our own imaginations, and not about the world. (But then, maybe this is just my bias, from not being very involved in it.) I do hope that this research doesn't get too far ahead of itself, and leave the world, and people behind. I suppose that physics has its control aspect as well.

A friend/colleague of mine forwarded me this link to these lectures by Hal Abelson and Gerald Jay Sussman about computer programming based around Lisp. One can only hope that some of this kind of spirit survives in the discipline of computer science. It is a joy of discovery and appreciation of simplicity and clarity that has a kindness and humanity behind it.

For me, I prefer to stick with the science. I prefer to try to understand what's already here, rather than create infrastructure to dramatically change things. And on the creation front, I prefer more modest modes of expression.

Looking back, I'm glad we have found quantum mechanics, and it is said that CERN produced the World Wide Web. I'm sorry that it has to come at the cost of a certain group of people presenting themselves as magicians.

A final comment I'd add is that even though I phrased this in terms of "popular imagination", I think there is more to the story than this. I think it is accurate to say that there isn't much chance that elementary particle physics will produce practical/technological benefits at this point. Certainly it's possible, and one never knows the results of research. But it certainly doesn't appear that the Higgs boson has any practical benefit, nor do the superparticles, for that matter. And I don't think many particle physicists think this either. I think they just believe very strongly in reductionism, and this is the main argument as to why particle physics research should continue. I have to admit, I'm sympathetic to this line of reasoning. But I also think something has gone wrong with this approach- it has gone too far- and it will be very interesting to see what the effects of LHC research are, whatever the results!

Saturday, January 23, 2010

philosophy of science

Hmm, so my interest in Nancy Cartwright isn't so idiosyncratic after all perhaps.

Thursday, January 21, 2010

wreckage?

I look around accelerator physics and see that there are so many tried and aborted projects.
I think of this process of working on an open source project, and I see it's been tried before with the UAL framework. Every idea one has has already been tried and has failed.

Perhaps I should try to put a more positive spin on this, and list the open source beam physics projects. We have Zgoubi, and the previously mentioned UAL. Then there is XAL, which is an offshoot of UAL developed for the SNS. Each of these is rather project-related, I believe.
UAL is an attempt at generality, but from a physics perspective seems to be more about absorbing existing codes, than developing new algorithms. It is used in the RHIC online model.

Also, I think beam physics should look to other fields, such as HEP with GEANT, and on the synchrotron optics side with codes such as SHADOW.