
Thursday, November 18, 2010

slow

Today I feel frustrated. There's a sense of crystallization. The job splits into two parts: a work part and a research part. Not exactly, and it's still a little vague, but it feels like it's heading towards this.
You create something (a possibility?) and then you live with it. But in the process you don't pursue other options, and I find a heaviness in taking on the option that I've created. I was talking to my dad a few days ago, describing some of my thoughts and efforts at cleaning up old messes and turning my field into something I can work with. He recommended a book he was reading about two brothers who can't throw anything away, who live in an apartment with everything they have ever collected.

Is this what I do? Simply refuse to move on? Keep working with an unworkable situation? In the language of the mess: is the mess simply too big? And even if it's not, will there be anything interesting left after the mess is clean?

I feel like it's all come to a stop. I reached a point of unstable equilibrium and just sit there, but it's a very gradual slope away from this point. There are things I can work on; they are somewhat useful but not urgent. There are halfway interesting research-like questions. But this is my own internal process. Does it match expectations and categorizations of achievement?

All this digital life continues to bother me (though I participate, such as with this blog). I continue to hear from the computer scientists about optimization and automated search and categorization. And one's activities on social networks become discrete. "So and so did this." "Now they did this." "Oh?" "Yes, they did do that." "Now so and so did this." "Really. That's great! So glad that you told me." Someone tells me about the classification of human actions. They describe the "atoms" of action. Tom Waits smoking a cigarette and drinking coffee from a mug, the motion of the hand away from the mouth, "atomic".

Wednesday, November 10, 2010

work

Back to this theme of work vs. research, I guess I want to put a good word in for the difficulty of work. (Previously I said researchers may work harder.)

I'm trying to calculate something for a bunch of stored measurements. The formula is known, and it's programmed into a Matlab code. I really just have to get the parameters and plug them into the formula. What can be so hard about this?

Well, first, there is a fair amount of uncertainty in some of the parameters. It takes some work to cross-check with various sources to get reasonable parameter values. Then I need to learn how to access the data and to manage the transfer of data and programs I write across a varied computing landscape. Finally, I need to choose which data to actually analyze so that the results will hopefully tell some kind of story out of which we can learn something.

A lot of these steps are the same in research. But even though there is a lot of uncertainty, in some ways there is more certainty, because there is a research program: a set of questions to be answered. In the case of almost-research, where the job is to understand something, keep it going, and maybe make a few improvements, it may be even more open-ended than research. Not to say I can't define some research projects within this, but there is a lot to be done that is really about gathering together data from disparate sources to understand and diagnose problems. And I wouldn't call this aspect research per se.

Tuesday, November 09, 2010

models

I enjoyed reading this paper today by Ronald Giere called "Representing with Physical Models".
It's an interesting thought that one can consider a graph or other representation of data, like a 3-D image, as a model in a similar sense to having Legos for a model car, or Lincoln Logs for a model house.

In the process of working through some problem, I often want feedback at an early stage, and so I produce some kind of plot that may partially get at what I want to say, or where I want to go, and I show it to a supervisor, or someone else. It's always an interesting process to have someone else look at your plot and take it as it is. For me, it is a temporary representation of some data I've been playing around with, but for someone else, it becomes an object contained within itself. They look at its boundary, ask about its imperfections, and describe the picture that it paints.

In the article, Giere describes representation via theoretical, physical, and computational models. His example of a theoretical model is a harmonic oscillator, his physical model example is Watson and Crick's colored balls representing DNA, and his computational model is a 3-D image of a protein based on theoretical calculations and some protein data. He wants to say that these are all basically doing the same thing: that together with a person to do the interpreting, each of these can be acted on in various ways to learn something about the real system.

I guess this makes some sense to me. The nice thing about a toy model of something is that you can play with it, get some feeling for it. You know harmonic oscillators have a fixed frequency, you can picture them oscillating in your mind, and you can even imagine the force they push back with as you try to compress the spring. Similarly with the real balls representing DNA and the 3-D image: you can play with them and relate them to things you know in the world. So it is with a plot you produce. Its limits and its potentialities may come alive in the viewer's mind. It doesn't tell all, but it gives something concrete to hang on to as you start building a picture of whatever it is you're trying to understand.
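Giere's computational-model idea is easy to make concrete. Here's a minimal sketch of my own (not from the paper; the parameter values, integrator, and crude frequency estimate are all arbitrary toy choices) of "playing with" a harmonic oscillator on a computer: integrate it, then read its fixed frequency back off the trajectory as if the trajectory were data.

```python
# A minimal sketch of "playing with" a computational model: a frictionless
# harmonic oscillator integrated numerically, with the oscillation frequency
# then estimated from the trajectory as if it were measured data.
import numpy as np

k, m = 4.0, 1.0              # spring constant and mass, arbitrary units
omega = np.sqrt(k / m)       # the fixed frequency of the theoretical model

dt, n_steps = 0.001, 200000  # time step and number of steps
x, v = 1.0, 0.0              # initial displacement and velocity

xs = []
for _ in range(n_steps):
    v += -(k / m) * x * dt   # semi-implicit Euler update
    x += v * dt
    xs.append(x)

# rough frequency estimate: count zero crossings of x(t)
crossings = np.sum(np.diff(np.sign(xs)) != 0)
freq_est = np.pi * crossings / (dt * n_steps)   # two crossings per period
print("model omega:", omega, " estimated from trajectory:", freq_est)
```

The point is only that the thing on the screen, like the colored balls or the spring you imagine compressing, is something you can poke at and interrogate, not the real system.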

In my last post I said that physics gives us a bunch of models which have been used to describe electron storage rings (the example I focus on because I work on this and want to clarify certain messy aspects of it). I think maybe some of the difficulty in this field is that computational approaches were developed, but somehow the last step of using them to make models didn't happen so well. One has a picture of a map with a resonance, but there's no good software to really turn this into a model where one can play with it and get a feel for it. (I suppose frequency map analysis software may qualify in this sense. One gets colorful pictures in which the resonances show up in the tune diagrams.) The concepts are there, and the software has been written (e.g. FPP), but not many people know how to use it, or how it relates to the phenomena of storage ring maps. In this context, "model" has usually meant the elements going into the computer code, and I suppose that's the theoretical model. But with the incoming model being very complex (so it's hard to play around with in one's mind), and the software not being easy to use and visualize and relate to familiar things, one is left without good conceptual tools to understand some of these phenomena.
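Just to make the frequency map analysis idea concrete, here is a bare-bones sketch of my own (a toy kick-plus-rotation map with arbitrary parameters, nothing like the real codes): track a particle, estimate its tune from an FFT of the turn-by-turn data, and compare the first and second halves of the tracking. The tune change between halves is the kind of "diffusion" indicator that, scanned over a grid of initial conditions, gets painted into the colorful tune diagrams.

```python
# A bare-bones version of the idea behind frequency map analysis, using a
# toy one-turn map (thin sextupole-like kick plus a linear rotation).
# All parameter values are arbitrary, chosen only for illustration.
import numpy as np

def one_turn(x, p, mu=2 * np.pi * 0.205):
    p = p + x**2                     # thin nonlinear kick
    c, s = np.cos(mu), np.sin(mu)
    return c * x + s * p, -s * x + c * p

def tune(xs):
    """Estimate the tune as the dominant FFT peak of x(turn)."""
    xs = np.asarray(xs)
    spectrum = np.abs(np.fft.rfft(xs - xs.mean()))
    k = 1 + np.argmax(spectrum[1:])  # skip the DC bin
    return k / len(xs)

x, p = 0.1, 0.0                      # one initial condition; FMA scans a grid
xs = []
for _ in range(2048):
    x, p = one_turn(x, p)
    xs.append(x)

nu1, nu2 = tune(xs[:1024]), tune(xs[1024:])
print("tune, first half:", nu1, "  second half:", nu2,
      "  tune change (diffusion indicator):", abs(nu1 - nu2))
```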

Monday, November 08, 2010

system, environment

So physics provides us with interpretive models. We have ways of translating things into mathematical structures. Let's take this example of the electron storage ring.
We have magnets. These are big, heavy metallic objects with current running through them, shaped in ways to produce magnetic fields. So we line these up and put them in some configuration. Now, there's a certain region of space inside all these magnets that maps out a doughnut-type shape. Physics gives us the model of a magnetic field at every point inside this doughnut.
We have devices that mesh well with this picture. They measure the field, and we basically assume that at a given time the field has some value everywhere, and that one can repeat measurements and get the same value. Then we throw some matter in there. We interpret that matter in terms of point charges with various masses and charges, such as electrons or air molecules.

The magnets and the magnetic field are the environment, or the background. The charges now move in this background. Now, depending on the needs, one can use different descriptions of the dynamics of the electrons. One can use classical electrodynamics to describe the motion of the charges, and the electric and magnetic fields they produce that may then also act back on those charges. I'm not entirely sure of the status of the self-force and consistency within classical E&M, but I think it's basically understood how to deal with it.
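(For reference, the classical equation of motion in question is just the Lorentz force law,

$$ \frac{d\mathbf{p}}{dt} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right), $$

with $\mathbf{p} = \gamma m \mathbf{v}$ for the relativistic electrons; the subtleties alluded to above come in when one tries to include the fields the charge itself radiates.)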

But actually, we need a little more than classical E&M. We need a bit of quantum mechanics. The radiation the electrons give off comes in lumps, and the lumpiness actually has an important effect that we can't ignore. Without the quantum lumpiness, for an appropriately set-up storage ring, the electrons would all end up at the center of the potential. Classical E&M says there is a damping mechanism that causes this to happen. Now, the interaction between the electrons would limit the resulting beam to a very small, finite size. But it turns out that the quantum lumpiness causes the beam to be much larger, and together with the damping mechanism, sets the size of the electron beam.

How do we treat the lumpiness? We use quantum mechanics (is it really full-blown QED? Or some semiclassical approximation given the emission spectrum of the electron?) to provide a diffusion coefficient. This turns the Lorentz equation into a stochastic differential equation. In the case of linear dynamics and constant damping and diffusion, the result is a Gaussian probability distribution, which, when considered for an ensemble of electrons, results in an actual Gaussian charge distribution.
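Schematically, in a one-dimensional caricature (my own shorthand, not the actual storage-ring equations), the damping plus the quantum lumpiness give something like an Ornstein-Uhlenbeck equation,

$$ dx = -\gamma\, x\, dt + \sqrt{2D}\, dW_t, $$

whose stationary distribution is Gaussian with variance

$$ \sigma^2 = \frac{D}{\gamma}, $$

so the equilibrium beam size comes out of the balance between the damping rate $\gamma$ and the quantum-excitation diffusion coefficient $D$.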

Once the magnetic field has been set, and one is considering charged particles, there are other formulations for describing the classical dynamics besides the Lorentz force law. In particular, one can use Lagrangian or Hamiltonian mechanics. Let us take the lead of Michelotti in describing this framework. He begins with the pendulum to introduce model systems that have the properties the maps around the storage ring will have. He emphasizes with the pendulum that the phase space may not be R^n, but is a manifold. In chapter two he introduces linear and nonlinear models. He discusses the Hopf map and the Hénon map, and gives the ideas of ergodicity and some other probability concepts such as partitions. So in general, we are actually in the realm of dynamical systems. And where does Michelotti end up? By chapter 5, he is discussing perturbation theories for Hamiltonian dynamics and tries to give a description of the Forest, Berz, Irwin normal form algorithm, which may contain isolated resonances.
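The Hénon map in its accelerator guise is exactly the kind of model map one can play with directly. A minimal sketch of my own (arbitrary parameters, not code from Michelotti or any real tracking program): a thin sextupole-like kick followed by a linear rotation through a fixed phase advance, iterated turn by turn.

```python
# A toy version of the accelerator-physics Henon map: thin sextupole-like
# kick, then a linear rotation through phase advance mu. Something one can
# iterate and poke at; all values are arbitrary.
import numpy as np

def henon_turn(x, p, mu):
    p = p + x**2                      # nonlinear kick
    c, s = np.cos(mu), np.sin(mu)
    return c * x + s * p, -s * x + c * p

mu = 2 * np.pi * 0.205                # a "tune" of 0.205
x, p = 0.1, 0.0                       # launch amplitude in normalized units

for turn in range(1, 11):
    x, p = henon_turn(x, p, mu)
    print(f"turn {turn:2d}:  x = {x:8.5f}   p = {p:8.5f}")
```

Nudging the tune toward a low-order resonance, or cranking up the launch amplitude, changes the character of the orbit, which is the kind of "get a feel for it" I keep asking for.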

So this is a long path stretching away from the magnets we see and the measured fields. It provides tools. A path to walk on. But there has to be a pulling from the other end. We have to know where we want to go. I would say that one usually wants to go to questions of stability. One wants to know whether a given bunch of electrons moving through this doughnut will last very long or not. And unfortunately, the elaborate normal form perturbation theories don't tell us this. And the same goes for the numerical implementation of these perturbation theories. One can compute resonance strengths and tune shift with amplitude to arbitrary order, for a machine with all the appropriate misalignments and field errors and the full Hamiltonian, and the perturbation theories still don't answer the stability question.

But these are nice paths. And the tools are good tools. But without some pulling from the other side, the use of these tools gets lost. One doesn't know that sometimes one needs to develop new tools, or maybe give up on full understanding and just track the particles and see what happens.
Between the end point of injection efficiency and Touschek lifetime (momentum aperture), and the beginning of magnets leading to particular paths through classical mechanics with brief borrowings and harvestings from the quantum, one will simply get lost on these paths.
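Just to make "track the particles and see what happens" concrete, here is a crude sketch of a dynamic-aperture estimate on a toy kick-plus-rotation map (my own illustrative example with arbitrary parameters, nothing like a real lattice): launch particles at increasing amplitude and keep the largest one that survives a fixed number of turns.

```python
# "Just track the particles and see what happens": a crude dynamic-aperture
# estimate for a toy one-turn map (thin sextupole-like kick plus rotation).
# Illustrative sketch only; not how a production tracking code is organized.
import numpy as np

def one_turn(x, p, mu):
    p = p + x**2                       # nonlinear kick
    c, s = np.cos(mu), np.sin(mu)
    return c * x + s * p, -s * x + c * p

def survives(x0, mu, n_turns=10000, r_lost=10.0):
    x, p = x0, 0.0
    for _ in range(n_turns):
        x, p = one_turn(x, p, mu)
        if x * x + p * p > r_lost**2:  # call the particle lost if it blows up
            return False
    return True

mu = 2 * np.pi * 0.205
aperture = 0.0
for x0 in np.arange(0.01, 1.0, 0.01):  # scan launch amplitudes outward
    if not survives(x0, mu):
        break
    aperture = x0

print("estimated dynamic aperture (toy units):", aperture)
```

The real-machine version differs mainly in the map (a full nonlinear lattice with errors) and in the bookkeeping, not in the logic.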

Friday, November 05, 2010

hot topics

I commented again (29) at Cosmic Variance on a post about "Physicalist Anti-Reductionism", which included a debate between John Dupré and Alex Rosenberg. Sean seems to minimize the importance of the topic, finding it "the most boring argument in all of philosophy of science."
To me, it gets back to this kind of split I experienced when reading Nancy Cartwright. I found it hard to do physics when I didn't have this grand picture of it in mind and instead took a skeptical approach. Can one be critical of something and excited about it at the same time?

But actually, to me, reading more skeptical philosophy of science is kind of like finding an honest way back to appreciating some of the stuff that originally excited me.
Maybe I'm just trying to justify choosing a not-so-"hot" topic in physics. Condensed matter theory, particle theory, or cosmology might have been sexier in some ways. Maybe I chose a purposefully boring topic because I thought it would be more honest.

Anyway, I was just realizing that this sense that a kind of reductionism is wrong has made me just not think very much about the components of things. Yes, there's a real sense in which we're made of molecules. And they are pretty cool. And there's a lot of them. And people make pretty pictures of them. And understanding a mechanism is pretty exciting.

The basic problem I have with accelerator physics is that, try as I may, I can't put it in the same bag of exciting stuff where I've put a lot of other topics before. Thinking about protein structure, or photosynthesis, or quantum mechanics is fun for me. But thinking about dispersion functions and chromaticity and tune shift with amplitude and momentum compaction factors... is just hard to get excited about. There were topics that originally seemed exciting. There's basically a new approach to classical mechanics that was developed in early accelerator theory: a Lie algebra approach. Then there's the stuff with power series, which its early advocate describes in terms of differential algebras with connections to non-standard analysis. But in some sense, these mathematical abstractions are a bit overblown (particularly the latter). The reason I say they are overblown is that the problem has not even been solved. The real non-linear dynamics problem is that of the dynamic aperture (the stable region of a non-linear map), and as far as I know, this isn't really a solved problem. So going so far out into a given formalism when that formalism doesn't even solve the main problem seems a little too much.

Anyway, I'm not giving up. I like the classical mechanics. Synchrotron radiation is something I can put in the bag of exciting stuff. And the awful, messy code situation may slowly improve. So that's sort of the package. We've got some kind of nice classical mechanics. A bunch of somewhat useful definitions of things that are measured. A bit of a computer code and sociological infrastructure difficulty, and then some cool stuff with synchrotron radiation. It's a topic. It may be more fun to think about ecology or species of mosses, or the definition and validity of reductionism. But at least the topic is becoming less awful. Less ugly. Back away from all the extremists with their unfinished pyramids to build, and one has a topic in need of some sprucing up and simplification, but honorable nonetheless.

Wednesday, November 03, 2010

advanced light sensing

A funny quote from p. 5 of the book "Elements of Synchrotron Light" by G. Margaritondo:
As new-born babies, we begin to learn by 'seeing' things with light, which consists of electromagnetic waves. As we grow up and become more sophisticated, we can use different types of electromagnetic waves to explore different properties of the world around us: for example infrared light to study atomic-level vibrations or X-rays to study the atomic structure of molecules.
Now, I'm always looking for how to "tell the story" of synchrotron light sources. But this is an odd angle! We start our lives by seeing with visible light, and then after we become more mature (as synchrotron light experimenters), we add infrared and X-rays to the spectrum of usable light to learn about the world!!