
July 9, 2004

Scandinavian but not Abelian

Posted by Urs Schreiber

Stepping off the propeller plane at Karlstad airport, I found myself surrounded by pine forests and in an atmosphere quite unlike that at larger airports – but what I did not find was my luggage.

[Photo: Karlstad airport]

Apart from the obvious inconveniences this meant that a couple of papers on non-abelian 2-form fields which I had brought with me were spending the night in Copenhagen, instead of attending the conference ‘NCG and rep theory in math-phys’ with me.

Not that there weren’t plenty of other things to think about, like Schweigert’s talk on how modular tensor categories and Frobenius algebras know about open strings, as well as many very mathematical talks with categories here and functors there

[Image: commuting diagrams]

but after I had given my talk on Loop space methods in string theory it turned out that several people were interested in nonabelian 2-form gauge theories, and on my way back to the hotel I had a very interesting conversation with Martin Cederwall about precisely the lost hep-th/0206130, hep-th/0207017 and hep-th/0312112, which I had intended to pull out of my hat on just such an occasion.

But maybe I was lucky after all: when at lunch the next day I talked about gauge invariances in 2-form theories with Jens Fjelstad, we had to reproduce the essential formulas ourselves on a sheet of scrap paper instead of just looking them up. Somehow this triggered the right neurons for me, and after a nap that evening I got up and saw the light.

[Photo: Karlstad center]

[Update 07/15/04: The issue discussed below can now be found discussed in hep-th/0407122.]

The point is that the 2-form on target space gives rise to an ordinary 1-form connection on loop space, of course, and I think I know precisely what this 1-form connection looks like, because I can derive it from boundary state deformations.

In a somewhat schematic and loose fashion we can write

(1) $\nabla = d + \oint_A (B) \,,$

following the notation in Hofman’s paper, but including a second factor of the $A$-holonomy, as I have discussed before.

Using this connection and the ordinary formula for its gauge transformations, one can check that global gauge transformations on loop space correspond to the ordinary 1-form gauge transformations

(2) $A \mapsto U A U^\dagger + U (dU^\dagger)$
(3) $B \mapsto U B U^\dagger$

on target space, while local gauge transformations on loop space give rise to

(4) $A \mapsto A$
(5) $B \mapsto B + d_A \lambda$

up to some correction terms which don’t have a target space analogue. I have given a somewhat more detailed discussion of this on sci.physics.strings.
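(Schematically, and suppressing the correction terms just mentioned: writing $\mathcal{A} \equiv \oint_A(B)$ for the loop space connection 1-form, the ‘ordinary formula’ invoked above is just the usual

$\mathcal{A} \mapsto \mathcal{U}\,\mathcal{A}\,\mathcal{U}^{-1} + \mathcal{U}\,(d\,\mathcal{U}^{-1})\,,$

with $\mathcal{U}$ a group-valued function on loop space. ‘Global’ versus ‘local’ refers to whether $\mathcal{U}$ is constant along loop space or not, and the target space transformations (2)-(5) then follow by unwinding the definition of $\oint_A(B)$.)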

As with any riddle, after having written this down it looks pretty obvious, but at least I hadn’t seen this clearly before.

The question now seems to be: Can we even expect to be able to write down a theory of point particles that is local in target space and respects the above gauge symmetries? What happens to the correction terms?

Rather, I’d suspect something like an OSFT which has the true loop space 2-form gauge invariance, but whose level-truncated effective field theory breaks some of it. But I don’t know.

When I mentioned to Martin Cederwall that we should maybe consider YM on loop space using the field strength $(d + \oint_A (B))^2$, he remarked that this would be a theory local in loop space, while ordinary OSFT is non-local in loop space (because the 3 ‘loops’ (or rather open intervals) involved in an interaction are not small deformations of each other and hence do not correspond to nearby points in loop space).

Well, so I don’t know what all this means. But as far as I can see nobody else does either, at least nobody understands it completely. Amitabha Lahiri kindly made me aware of a couple of papers he has on attempts to construct field theories with some reasonable 2-form gauge invariances. I will try to have a look at these papers and see if the Lagrangians considered there might be understood in terms of the loop space connection

(6) $d + \oint_A (B) = \mathcal{E}^{\dagger (\mu,\sigma)}\left( \partial_{(\mu,\sigma)} + U_A(0,\sigma)\, B_{\mu\nu} X^{\prime\nu}\, U_A(\sigma,0) \right) \,.$
Posted at July 9, 2004 8:28 AM UTC

116 Comments & 0 Trackbacks

Re: Scandinavian but not Abelian

Hi -

In a private email (before noticing the new SCT entry), I said…

I haven’t mastered the loop space formulation, but the idea seems really natural to me. I am sure that you are correct and people will take notice of this and your deformation stuff. An obvious question: can the process be iterated? What would be the loop space of loop space? :) Would you get non-abelian 3-form stuff? :)

You responded…

Good question. My advisor asked me the same question today at lunch. My answer
was that, yes, I think this iterates. Naively at least I can imagine the loop
space over loop space, for instance (and I seem to recall John Baez having
mentioned something like that before). This would be “torus space”, the space
of all maps from the torus into target space. It should be relevant for the
supermembrane, which indeed couples to the 3-form that appears in 11d
supergravity. There should then also be nonabelian 3-forms and so on.

But - wait - we should be discussing this at the String Coffee Table. It could
need some activity. :-)

Ok! Ok! :)

Another obvious question…

Is it possible to generalize “loop” space to “string” space that includes both open and closed strings? If you could construct a “string space of string space”, then you could talk about general maps from seemingly general 2d manifolds (branes?) into target space.

Just curious, but would even the loop space of loop space admit things more general than tori? I can almost picture a Klein bottle among other things, e.g. closed strings that twist around.

[snip of some stuff about Pohlmeyer invariants I don’t think you want me reproducing in public ;)]

I also said (in regard to the lousy communication skills of most string theorists)

I know! I see you being assimilated! :) Your notes are becoming less and less comprehensible ;)

to which you replied

Ok. So we should start a project: Rephrase everything that I think can be said
about strings in loop space formulation in a way that is understandable for
non-experts in strings. I think there is a very good chance that this is
possible. Many aspects of string theory look surprisingly natural in loop
space formulation. For instance isn’t it kind of remarkable that the concept
of the spacefilling D9 brane translates in loop space language just to the
constant 0-form on loop space? What are called ‘boundary states’ in string
theory are really just the constant 0-form acted on by some unitary operators
on loop space.

This sounds like a good idea :) Maybe we can work on this together and put something on the arXiv. Then again, knowing how notoriously slow I am at writing things up, you might want to go ahead on your own. I’d be happy to at least make suggestions :) We started to do this for your string theory seminar, where I was able to write down a pretty sleek coordinate independent version of the Polyakov action. Doing so made the relation to BF-YM theory kind of obvious to me. Knowing that there are probably infinitely many ways to write down an action that reduces to Nambu-Goto “on shell” caused me to lose interest :)

This loop space idea is pretty neat though. Of course, I would suggest presenting the discrete version first, which would be much simpler :) Contraction “integrals” over continuum indices become summations, which more closely resemble the usual contraction of indices; something like the sketch below.
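For instance, here is a minimal numerical sketch (my own toy notation and field choices, not anything from the papers) of how the component formula in eq. (6) of the post might look when the loop is sampled at N points, so that every $\oint d\sigma$ contraction becomes a finite sum:

```python
import numpy as np
from scipy.linalg import expm

# Toy discretization of the loop-space connection component (eq. (6) above):
#   A_(mu,k) ~ U_A(0,k) B_{mu nu}(X_k) X'^nu_k U_A(k,0),
# with the loop sampled at N points, so continuum (mu,sigma) indices become
# ordinary index pairs (mu,k). All names and field choices here are mine.

N, D, n = 32, 4, 2                       # loop samples, target dim, gauge rank
s = 2 * np.pi * np.arange(N) / N
X = np.stack([np.cos(s), np.sin(s), 0 * s, 0 * s], axis=1)  # a sample loop
dX = np.roll(X, -1, axis=0) - X          # discrete X' dsigma

rng = np.random.default_rng(0)

def su2():
    """A random anti-hermitian traceless (su(2)-valued) test matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = (M - M.conj().T) / 2
    return M - (np.trace(M) / n) * np.eye(n)

A = [su2() for _ in range(D)]            # constant test 1-form A_mu
Braw = [[su2() for _ in range(D)] for _ in range(D)]
B = [[Braw[m][nu] - Braw[nu][m] for nu in range(D)] for m in range(D)]  # antisymmetric B

def U_A(j, k):
    """Discrete path-ordered holonomy of A from sample j to sample k (j <= k)."""
    U = np.eye(n, dtype=complex)
    for i in range(j, k):
        U = expm(sum(A[m] * dX[i, m] for m in range(D))) @ U
    return U

def conn(mu, k):
    """Discrete (mu,k) component of the connection: U_A(0,k) B.X' U_A(k,0)."""
    W = U_A(0, k)
    BX = sum(B[mu][nu] * dX[k, nu] for nu in range(D))
    return W @ BX @ np.linalg.inv(W)     # U_A(k,0) = U_A(0,k)^{-1}

print(conn(0, 5))                        # one n x n matrix per loop-space index (mu,k)
```

Each component is then just an ordinary matrix sandwich, which is the point: on a finite sampling, a loop space index $(\mu,\sigma)$ is literally just a pair of ordinary indices.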

Just a thought…

I also said

One of the things that inspired me to learn some elementary string theory was a statement you made (several times in fact) in response to complaints that string theory does not predict anything. You said that the fact that string theory does not predict anything should not be considered necessarily a bad thing because Newton’s gravitational theory doesn’t predict anything either. Not without specifying initial conditions. Finding the right vacua in string theory is like trying to find the right initial configuration of planets in the solar system. Once this is done, you can make many predictions about the subsequent orbits of the planets. This made a lot of sense to me.

You replied

Nice to hear. I get kind of frustrated hearing [people] repeat [their] claim ‘no predictions’ no matter what. In precisely the same sense field theory as such does not make any predictions. I mean, just the statement “The world is described by field theory.” does not allow you to predict anything. You instead need to pick a particular Lagrangian. String theory has the advantage that all these Lagrangians are not just there to be picked but are part of a larger framework which in principle should tell you what to choose. So in principle string theory is more predictive than field theory.

I continued…

On the other hand, I am not quite convinced that the situation is as easy as that. Before Newton could write down his gravitational theory, he first had to invent his calculus. Once you have the calculus, you need some basic physical laws, like F = dp/dt. Once you have the physical laws, THEN you can construct a physical model, e.g. the solar system.

to which you replied

I am not sure about your distinction between laws and models. But one should
certainly agree with what everybody is saying, namely that the full picture of
string theory has not yet emerged.

In the case of Newton’s gravitational theory, the “law” is the law of gravitation. The “model” is the specific placement of planets. The model is governed by the law. I don’t know quite how this analogy translates for strings.

I continued

I get the feeling that string theory has not even passed the “calculus” phase yet, i.e. the “calculus of strings” is still being developed.

To which you replied

BTW, I think Martinec used to call conformal field theory the “calculus of
string”.

Neat :)

I continued

On another note, I have actually been working hard on the discrete stuff. I don’t have a lot to show for it though :) It might be a waste of time, but I have been LaTeX’ifying Robb’s 21 postulates for axiomatizing Minkowski space. Then I will LaTeX’ify Zeeman’s postulates and finally Penrose’s. I hope to compare and contrast them. I will then try to distill out the “meaning” of what they are trying to say. I will then try to use this “meaning” to motivate the discrete approaches of Sorkin, D&MH, and our notes. Well, that is the plan anyway :)

to which you replied

Robb has 21 axioms? So many?

Yes. Although all of them rely only on the relations “before” and “after”, it does seem unnecessarily convoluted. By the way, I forgot to mention Mundy in my list. Mundy basically reformulated what Robb did in a much more concise manner using the relation of being “light-like” separated as opposed to Robb’s “time-like” relation. Another motivation for LaTeX’ifying everything is to help me verify that Mundy actually does reproduce everything Robb does.

Gotta run!

Eric

Posted by: Eric on July 9, 2004 5:35 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Oops!

… sleek coordinate independent version of the Polyakov action.

Of course Polyakov is coordinate independent. I meant that I wrote down a sleek notation that was “coordinate free”. If coordinates do not even appear in the expression, then it is obviously coordinate independent. I prefer “coordinate free” notation whenever possible.

Eric

Posted by: Eric on July 9, 2004 5:44 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Is it possible to generalize ‘loop’ space to ‘string’ space that includes both open and closed strings?

Hm, well, er, in principle, why not? I mean, this beast certainly exists somehow. But I think an important insight is that lots of open string physics can be captured instead much more elegantly by the boundary state formalism, where open string physics happens inside closed string inner products with a closed string state inserted on one side.

Just curious, but would even the loop space of loop space admit things more general than tori? I can almost picture a Klein bottle among other things, e.g. closed strings that twist around.

Depends on how precisely you define loop space. In order for a Klein bottle to appear as a point in the loop space of the loop space of target space, the loop space of target space has to be that of unoriented loops, i.e. one where a single point in loop space corresponds to a given loop or its orientation reversal.

Contraction ‘integrals’ of continuum indices become summations, which more closely resembles the usual contraction of indices.

That would be polygon space. Polygon space is different from a discretized loop space. But of course one could consider it, too. The problem is that it badly breaks conformal invariance on the worldsheet. I have speculated before how one could try to make sense of it anyway. I am not sure yet that there is anything interesting to be found. But maybe there is.

In this case of Newton’s gravitational theory, the ‘law’ is the law of gravitation. The ‘model’ is the specific placement of planets. The model is governed by the law. I don’t know how this analogy quite translates for strings.

So by model you mean a point in phase space, i.e. one solution of the system. Take the IKKT version of strings, as a drastic example. The law is $[A^\mu,[A_\mu,A_\nu]] = 0$ and a ‘model’ in your sense is any set of 10 large matrices $A_\mu$ that solve this equation.

Posted by: Urs Schreiber on July 9, 2004 6:17 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

BTW, concerning loop space and CFT and the ‘calculus of string’, I should emphasize that the loop space formulation and the usual CFT language are two sides of the same coin. Usual CFT works in the Heisenberg picture with worldsheet-time dependent operators (fields), whereas the loop space context uses the canonical Schrödinger picture formulation of the worldsheet field theory. As always, some aspects of a theory are more easily visible in the Heisenberg picture, others in the Schrödinger picture. In field theory the Schrödinger picture is usually very awkward. But I think at least on the 2d worldsheet it proves to be a useful point of view for some questions.

Posted by: Urs Schreiber on July 9, 2004 7:21 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Ok. So we should start a project: Rephrase everything that I think can be said about strings in loop space formulation in a way that is understandable for non-experts in strings.

I am spending a little time thinking about this. The first thing I would suggest is that you stop calling

(1) $d_K = d + i_K$

a “deformation of the exterior derivative.” This is not a deformation of the exterior derivative because the exterior derivative is the transpose of the boundary operator. Instead, $d_K$ is some other operator constructed from $d$ and $i_K$, i.e. it is the “square root” of the Lie derivative. Since the motivation is to try to make things more transparent to the non-experts (like myself), I think it is worth some haggling over defining terminology and notation.

Another thing, why is the kinematical configuration space of bosonic strings the space of parameterized strings on target space as opposed to unparameterized strings? Why introduce parameterizations if nothing is going to depend on them?

Another thing, can you define integration of forms on loop space? Stokes’ theorem? This is how you should define the exterior derivative on loop space and not via some abstract algebraic (unmotivated) definition. I believe that Stokes’ theorem on loop space should play a central role and as far as I can see, it hasn’t even been mentioned yet (unless I missed it).

More coming soon…

Eric

Posted by: Eric on July 10, 2004 10:14 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

In http://arxiv.org/abs/hep-th/0401175 you say

Let $(\mathcal{M}, g)$ be a pseudo-Riemannian manifold, the target space, with metric $g$, and let $\mathcal{L}\mathcal{M}$ be its loop space consisting of smooth embeddings of the circle into $\mathcal{M}$:

(1) $\mathcal{L}\mathcal{M} := C^\infty(S^1,\mathcal{M}).$

To me this seems to only define the point set of $\mathcal{L}\mathcal{M}$ and says nothing about its topology. Are two points that are “close” in $\mathcal{L}\mathcal{M}$ corresponding to two loops that are “close” in $\mathcal{M}$? In both instances, how is “close” defined?

How is the topology of loop space defined?

Eric

Posted by: Eric on July 10, 2004 10:51 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Several years ago, I went bonkers over how beautiful loop space methods were, and I thought it was a shame that they didn’t seem to have a place in the “northern” approach to loop quantum gravity. I was rooting for the southern approach because it made use of the “loop derivative.” I did believe and I do believe that loop space methods are too beautiful not to be important for something.

Now here I am full circle approaching it again from a slightly different angle. I thought that for historical purposes, it might be entertaining to take a look at

Loop derivative

Eric

PS: For even more fun, check this out.

Posted by: Eric on July 11, 2004 3:06 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Walking down memory lane, i.e. rereading old s.p.r. posts, I found that a URL I gave is no longer valid. Some minor tweaking turned up the current valid URL

http://www-dft.ts.infn.it/~ansoldi/RedTape/Curriculum/HTML/Articles.html

In particular, I see something you (Urs) might find interesting (if you haven’t seen it)

String Propagator: A Loop Space Representation

To quote the abstract

The string quantum kernel is normally written as a functional sum over the string coordinates and the world-sheet metrics. As an alternative to this quantum field-inspired approach, we study the closed bosonic string propagation amplitude in the functional space of loop configurations. This functional theory is based entirely on the Jacobi variational formulation of quantum mechanics, without the use of a lattice approximation. The corresponding Feynman path integral is weighed by a string action which is a reparametrization invariant version of the Schild action. We show that this path integral formulation is equivalent to a functional “Schrödinger” equation defined in loop-space. Finally, for a free string, we show that the path integral and the functional wave equation are exactly solvable.

Eric

Posted by: Eric on July 11, 2004 3:19 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

String Propagator: A Loop Space Representation

I am aware of this paper. In spite of what its abstract seems to indicate, I could never really relate it to what I am concerned with, though. Maybe my fault.

Concerning the ‘loop derivative’, I have looked at some (possibly not all) of the links that you provided, in particular the simple explanation that John Baez gives here.

Seems to me that when the space of differential forms on loop space is taken to be well-behaved enough, this derivative exists and is pretty much just a Lie derivative on loop space. From what John Baez says in that message it looks like the reason this object does not exist in ‘northern LQG’ is the fact which we have discussed before in the context of the ‘LQG string’, namely that there we have non-weakly-continuous reps and non-separable Hilbert spaces.

Posted by: Urs Schreiber on July 11, 2004 9:46 AM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

How is the topology of loop space defined?

The best is really to think of loop space as an $\infty$-dimensional manifold with coordinates $\{X^{(\mu,\sigma)}\}$. Then the natural topology is just the usual one, where an open set is given by open intervals in each of the coordinates.

Heuristically, two loops are close together if one is obtained from the other by deforming it ever so slightly.

Posted by: Urs Schreiber on July 11, 2004 9:27 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

I would suggest that you stop calling $d_K = d + i_K$ a ‘deformation of the exterior derivative’

Here ‘deformation of X’ is meant in the sense of ‘a continuous 1-parameter family of objects such that for parameter $= 0$ we have the original object and for parameter $\neq 0$ we have the deformed object’. In this sense this is a deformation, since really you should have $d_K = d + iT\,\iota_K$, where $T$, the string’s tension, is the parameter which, when turned on, deforms this operator away from the original exterior derivative.

why is the kinematical configuration space of bosonic strings the space of parameterized strings on target space as opposed to unparameterized strings? Why introduce parameterizations if nothing is going to depend on them?

Good point. The answer is: Because it is easier. The question is pretty much the same as ‘Why work with coordinates in GR if nothing is going to depend on them anyway?’ We need the coordinates to even write down the formulas which express their irrelevance! :-)

So for the string we have for instance the Polyakov action. It has a redundancy, namely conformal invariance. That’s why we get constraints, which express that physical states should not have this redundancy. But the constraints hence must act on a space of states which does have the redundancy. The subspace annihilated by them is the subspace where the redundancy is gone - the physical subspace.

But the ‘deformed’ exterior derivative above is a pretty neat way to deal with this. It is not nilpotent, but its nilpotency is restored when restricted to the rep-invariant subspace.
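In formulas, assuming only the Cartan relation $d\,\iota_K + \iota_K\, d = \mathcal{L}_K$ (a sketch, in the notation above):

$d_K^2 = (d + iT\,\iota_K)^2 = iT\,(d\,\iota_K + \iota_K\, d) = iT\,\mathcal{L}_K\,,$

using $d^2 = 0$ and $\iota_K^2 = 0$. So $d_K^2$ vanishes exactly on forms annihilated by the reparametrization Lie derivative, i.e. on the rep-invariant subspace.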

So this is somewhat similar to your construction of chains from path algebras. Paths, on which the ‘boundary operator’ is not nilpotent restrict to chains, on which it is.

Another thing, can you define integration of forms on loop space?

Yes. The Hodge inner product on forms over loop space is essentially (up to a certain switch of sign in the 0-modes) the inner product on the superstring’s Hilbert space. A state in the string’s Hilbert space is an (inhomogeneous) differential form on loop space.

Stokes’ theorem?

In principle, yes. I think I haven’t come across its analogue in the CFT language yet, but it should be there.

Posted by: Urs Schreiber on July 11, 2004 9:21 AM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Hi Urs,

I think I am going to put a little more effort into trying to convince you to consider changing terminology about deformed exterior derivatives.

As we discussed here, there are at least three profound ways to “deform” the exterior derivative:

(1) $d + A$
(2) $d + i_X$
(3) $d + d^\dagger.$

The first one is the “square root” of the curvature, the second is the square root of the Lie derivative, and the third is the square root of the Laplace-Beltrami operator. In each case, we could append a factor $T$ to the second term that we could, in principle, let go to zero and recover the usual exterior derivative. In this sense, we could consider each one to be a deformation of the exterior derivative. However, I would argue that this belies the geometrical meaning behind each one.
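For the record, here is the quick check behind the three “square root” statements, using nothing but $d^2 = 0$, the graded Leibniz rule and the Cartan relation (schematically, with $A$ acting in some representation):

$(d + A)^2 = dA + A\wedge A = F_A\,,\qquad (d + i_X)^2 = d\, i_X + i_X\, d = \mathcal{L}_X\,,\qquad (d + d^\dagger)^2 = d\, d^\dagger + d^\dagger d = \Delta\,.$

In each case the cross terms carry all the geometry, which is why I’d argue the three deserve three different names.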

Would it be so controversial to think of some name besides “deformed exterior derivative” for $d + i_X$? The first item is called the “covariant exterior derivative”. I am not too thrilled about this name either, but at least it makes clear that it is geometrically different from the exterior derivative. The third item is called the Dirac-Kaehler operator, which I am perfectly happy with. I think I recall you referring to $d + i_X$ as the susy generator. Is that true? Could we call the second item the “susy generator” or something instead of deformed exterior derivative?

Ah ha! :) In your paper

On deformations of 2d SCFTs

you have a footnote referring to the equation

(4) $d_K \to e^{-W} d_K e^{W},\quad d^\dagger_K \to e^{W^\dagger} d^\dagger_K e^{-W^\dagger} \qquad (1.2)$

that says

Throughout this paper we use the term “deformation” to mean the operation (1.2) on the superconformal generators, the precise definition of which is given in §3.2 (p.15). These “deformations” are actually isomorphisms of the superconformal algebra, but affect its representations in terms of operators on the exterior bundle over loop space.

Quite apart from the fact that you give a precise meaning to the word “deformation” - one which $d + i_X$ does not fit - it looks like you are referring to $d_K$ as a “superconformal generator”. Why can’t we just use this terminology for $d_K$?

If we refer to $d_K$ as a deformation of $d$, then we need to refer to $e^{-W} d_K e^W$ as a “deformation of a deformation of $d$.” I understand that it is not incorrect to refer to $d_K$ as a deformation of $d$, but if our goal is to make things clearer, shouldn’t we try to avoid clumsy phrases like the above? Then again, I am not too sure I am fond of the term “superconformal generator” either because it is so scary :) Is there a less intimidating term we can use? Something like “the square root of the Lie derivative”? :)

Another reason why I would like to try to keep the exterior derivative as a more sacred operator is that I am enamored of the beauty and profoundness of the generalized Stokes’ theorem. We should leave this temple unspoiled if we can :)

If we are going to deform $d$, then we had better deform the boundary map $\partial$ as well. Although we could, in principle, deform $\partial$ in a way to have a deformed Stokes’ theorem, I don’t think anyone would think this to be a natural thing to do.

Random thought…

I wonder if there is some natural operator $H_X: C_p \to C_{p+1}$ such that

(5) $\int_S i_X \alpha = \int_{H_X S} \alpha.$

I can imagine $X$ determining a flow which sweeps the $p$-chain $S$ along, forming a $(p+1)$-dimensional chain. If there were such a thing, then maybe it wouldn’t be so unnatural to define

(6) $\partial_K = \partial + H_K$

so that

(7) $\int_S d_K \alpha = \int_{\partial_K S} \alpha.$

Hmm…

Anyway…

Stokes’ theorem?

In principle, yes. I think I haven’t come across its analogue in the CFT language yet, but it should be there.

If this is a hole, I think it is a significant hole and maybe we should fill it.

Eric

Posted by: Eric on July 11, 2004 3:54 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Hi again,

I wonder if there is some natural operator $H_X: C_p \to C_{p+1}$ such that

(1) $\int_S i_X \alpha = \int_{H_X S} \alpha.$

I can imagine $X$ determining a flow which sweeps the $p$-chain $S$ along, forming a $(p+1)$-dimensional chain.

I get the strong feeling I am reinventing old well-known material here, but this is kind of interesting :)

Given a $p$-chain $S$ and a (smooth) vector field $X$, the flow generated by $X$ will drag $S$ along, sweeping out a $(p+1)$-chain. Let $\phi(t): \mathcal{M}\to\mathcal{M}$ be the flow and let $\phi(t)_* S$ denote the $p$-chain $S$ carried along the flow to time $t$. Then define

(2) $H_X(t) S = \bigcup_{0\le t'\le t} \phi(t')_* S$

to be the $(p+1)$-chain obtained by sweeping $S$ along $X$ for a time $t$.

I could very well be (and probably am) wrong, but it looks like we have

(3) $\int_S i_X\alpha = \frac{d}{dt} \left. \left[\int_{H_X(t) S} \alpha \right]\right|_{t = 0}.$

This feels right. If it is, that would be pretty neat.

It seems to be right for the case where $\alpha = df$ is a 1-form, i.e. for the left-hand side we have

(4) $\int_p i_X df = \left. \left(i_X df \right) \right|_p = \left. df(X) \right|_p,$

which is the directional derivative of $f$ along $X$ evaluated at the point $p$. For the right-hand side we have

(5) $\frac{d}{dt} \left. \left[ \int_{H_X(t) p} df \right] \right|_{t=0} = \frac{d}{dt} \left. \left[ \int_{\partial H_X(t) p} f \right] \right|_{t=0} = \left. \frac{d}{dt} \left[ f(\phi(t)_* p) - f(p) \right] \right|_{t=0},$

which is also the directional derivative of $f$ along $X$ evaluated at $p$. So it works for this case, but this might be too special because $i_X df = \mathcal{L}_X f$.

I think the case when $\alpha$ is exact is covered in Frankel (I’ll need to check), but I don’t recall seeing this relation

(6) $\int_S i_X\alpha = \frac{d}{dt} \left. \left[\int_{H_X(t) S} \alpha \right]\right|_{t = 0}.$

for $\alpha$ not exact.

Eric

Posted by: Eric on July 11, 2004 4:41 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

I think the case when $\alpha$ is exact is covered in Frankel (I’ll need to check), but I don’t recall seeing this relation

(1) $\int_S i_X\alpha = \frac{d}{dt} \left. \left[\int_{H_X(t) S} \alpha \right]\right|_{t = 0}$

for $\alpha$ not exact.

Ok. I just checked Frankel and he has the expression

(2) $\frac{d}{dt'} \left. \left[\int_{S(t')} \alpha\right] \right|_{t' = t} = \int_{S(t)} \mathcal{L}_X \alpha,$

where $S(t) = \phi(t)_* S$. This is rather different from what I am proposing in the quote above. However, I did check to make sure that the two expressions are compatible, and they are (or seem to be).

From the expression in Frankel, we have

(3) $\frac{d}{dt'} \left. \left[\int_{S(t')} \alpha\right] \right|_{t' = t} = \int_{S(t)} \mathcal{L}_X \alpha = \int_{S(t)} (i_X d\alpha + d i_X \alpha) = \int_{S(t)} i_X d\alpha + \int_{\partial S(t)} i_X \alpha.$

Now considering the first of the two terms on the right-hand side above we have (using my expression)

(4) $\int_{S(t)} i_X d\alpha = \frac{d}{dt'} \left. \left[\int_{H_X(t') S(t)} d\alpha\right] \right|_{t' = t} = \frac{d}{dt'} \left. \left[\int_{\partial[H_X(t') S(t)]} \alpha\right] \right|_{t' = t}$

and considering the second term we have

(5) $\int_{\partial S(t)} i_X \alpha = \frac{d}{dt'} \left. \left[\int_{H_X(t') \partial S(t)} \alpha\right] \right|_{t' = t}.$

It seems kind of miraculous the way it works out, but unless I made a mistake, it appears that (while keeping track of orientation) we have

(6) $\partial[H_X(t') S(t)] + H_X(t') \partial S(t) = S(t') - S(t)$

so that

(7) $\frac{d}{dt'} \left. \left[\int_{\partial[H_X(t') S(t)] + H_X(t') \partial S(t)} \alpha\right] \right|_{t' = t} = \frac{d}{dt'} \left. \left[\int_{S(t') - S(t)} \alpha\right] \right|_{t' = t} = \frac{d}{dt'} \left. \left[\int_{S(t')} \alpha\right] \right|_{t' = t}$

as it should :)

This gives me a boost of confidence that perhaps my expression is not too far off (and might actually be correct).

Recall that the reason I am interested in this in the first place is that I am trying to see if there is some natural way to extend Stokes’ theorem into something that might be called a “super Stokes’ theorem” :) I really like the sound of that :) The super Stokes’ theorem should look something like

(8) $\int_S d_K \alpha = \int_{\partial_K S} \alpha.$

From the above, it appears that this is not possible in the continuum. However, there certainly is a natural extension of this to the discrete theory. Yet another reason why the discrete theory is superior to the continuum ;)

I claim that in the discrete theory you can certainly write down a very natural “super Stokes’ theorem”, where the word “super” is not meant to mean “great”; it relates to supersymmetry :)
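To back that claim up, here is a minimal sketch on a cycle graph (all names and conventions mine): vertices $0,\dots,N-1$, edge $i$ running from vertex $i$ to vertex $i+1$, with the sweep operator $H$ dragging each vertex one step along the flow. The discrete “super Stokes’ theorem” is then automatic, because the super exterior derivative can simply be defined as the transpose of $\partial_K = \partial + H_K$:

```python
import numpy as np

N = 6  # vertices 0..N-1 on a circle; edge i runs from vertex i to vertex i+1

# Boundary matrix D: C_1 -> C_0, edge i |-> vertex_{i+1} - vertex_i
D = np.zeros((N, N))
for i in range(N):
    D[(i + 1) % N, i] += 1.0
    D[i, i] -= 1.0

# Sweep operator H: C_0 -> C_1, vertex i |-> edge i (one step along the flow)
H = np.eye(N)

# On 0-chains, partial_X^2 = D @ H should equal (phi_* - 1), as in the continuum
shift = np.roll(np.eye(N), 1, axis=0)   # phi_*: vertex i -> vertex i+1
assert np.allclose(D @ H, shift - np.eye(N))

# With d_X defined as the transpose of partial_X = D + H (acting blockwise on
# chains C_0 (+) C_1), the pairing <d_X a, S> = <a, partial_X S> holds
# identically: the discrete 'super Stokes' theorem' is built into the definition.
```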

Eric

Posted by: Eric on July 11, 2004 6:46 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

By the way, if any of this can be made to make sense and there is a “super Stokes’ theorem”, then this would justify a proposal to call $d_K$ the “super exterior derivative” and $\partial_K$ the “super boundary” :)

Just a thought :)

This way we can refer to

(1) $d + A + i_K$

as the “covariant super exterior derivative.” Phew! What a mouthful :)

Eric

Posted by: Eric on July 11, 2004 6:56 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Would it be so controversial to think of some name besides ‘deformed exterior derivative’ for $d + \iota_X$?

You certainly have a point, since in any case the deformation here is different from these other deformations - at least on the surface of it. (But there is a subtle relation: it is actually possible to get $d + i\,\iota_K$ from similarity transformations and taking some linear combinations, starting from $d$ alone. This is the content of equations (701) and (702) of this. But I so far haven’t managed to make any good use of this.)

I wonder if there is some natural operator $H_X: C_p \to C_{p+1}$ such that

(1) $\int_S \iota_X \alpha = \int_{H_X S} \alpha\,.$

I don’t know if this works for general vector fields, or only for Killing vector fields $X$, but I think something like that should work for $X = K$, the reparametrization Killing vector on loop space.

The reason is essentially that $\iota_K$ is nothing but the T-dual of $d$!

I give the explanation of that weird-sounding statement in that deformation paper. The point is that under the exchange of form creators with annihilators and of partial derivatives with multiplication by $X^\prime$, the canonical supercommutation relations remain intact. So algebraically there is little difference between $d$ and $\iota_K$, and as $d$ generates a differential calculus based on the 0-form, $\iota_K$ should generate a differential calculus based on the top form.

You write:

(2) $H_X(t) S = \bigcup_{0 \leq t^\prime \leq t} \phi(t^\prime)_* S$

to be a $(p+1)$-chain

Wait, now I am confused: Are you claiming that a linear combination of $p$-chains can be a $(p+1)$-chain? I don’t see how this should work, even in the discrete case, but maybe I am misunderstanding your notation.

As I said above, I would expect that the best way to get something like Stokes for $\iota_K$ would be to regard this operator as another exterior derivative, a T-dual one, and devise a T-dual Stokes law for it.

Posted by: Urs Schreiber on July 12, 2004 10:14 AM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Too bad! :)

I was so looking forward to hearing what you had to say that I almost couldn’t sleep last night. I rush in (half dressed) to check SCT this morning and this is all you have to say?!?!? :)

You write:

(1) $H_X(t)S = \bigcup_{0\le t'\le t} \phi(t')_* S$

to be a $(p+1)$-chain

Wait, now I am confused: Are you claiming that a linear combination of $p$-chains can be a $(p+1)$-chain? I don’t see how this should work, even in the discrete case, but maybe I am misunderstanding your notation.

The notation could probably use some work, but think of what I am saying. We have a $p$-chain $S$ and a vector field $X$ that determines a 1-parameter flow $\phi(t):\mathcal{M}\to\mathcal{M}$. Following Frankel, we can define

(2) $S(t) = \phi(t)_* S$

to be the $p$-chain $S$ carried along $X$ for a time $t$. The $p$-chain $S$ is going to “sweep out” a $(p+1)$-dimensional region as it gets carried along $X$. The region is going to be time dependent and I denote the corresponding $(p+1)$-chain by

(3) $H_X(t) S.$

There is a little more to the story because a chain is more than just a point set, but involves orientation as well. This is not too difficult to work out. In the continuum, I found that I could not construct some $h_X: C_p\to C_{p+1}$ that satisfies

(4) $\int_S i_X \alpha = \int_{h_X S} \alpha.$

The best I could do in the continuum was (the closely related version)

(5) $\int_S i_X \alpha = \frac{d}{dt} \left. \left[\int_{H_X(t) S} \alpha\right] \right|_{t = 0}.$

This is the best you can do using standard continuum differential geometry because there you think of $X_p$ as a tangent vector at a point $p$, whereas a tangent vector should really be associated with a little infinitesimal line segment. If instead of standard differential geometry you were dealing with synthetic differential geometry, which does treat little infinitesimal line segments, then you could get rid of the time derivative out front. Since little line segments are fundamental in the discrete theory, things work out perfectly naturally there and you obviously do not need a time derivative out front, meaning you can have a “T-dual” Stokes theorem. (I need to read your notes because I have no idea what “T-dual” means, but what you say sounds very interesting, i.e. that $i_X$ is “T-dual” to $d$.)

Without reading your notes yet, it seems that if you can get a T-dual Stokes theorem, then it might mean that T-dual differential geometry is more closely related to synthetic differential geometry than to standard differential geometry. That would be interesting if true :)

Eric

PS: This

(6) $\int_S i_X \alpha = \frac{d}{dt} \left. \left[\int_{H_X(t) S} \alpha\right] \right|_{t = 0}.$

is NEAT!!! I am pretty sure it is correct too.

Posted by: Eric on July 12, 2004 1:11 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Too bad! :)

Oops, you are right. I wasn’t paying attention closely enough.

To my defence I could cite that my brain was really occupied with another calculation concerning nonabelian loop space connections, as well as with some responses that I got on my latest spr post (unfortunately only by private email). But I shouldn’t.

You are right, the formula you give seems to be correct. It becomes pretty obvious when one picks some Cartesian set of coordinates and everything else rectangular, too.

For instance if $\alpha = f(x)\, dx^1 \wedge dx^2$ and $X = \partial_{x^2}$ and $S = \{x^2 = 0\}$, then $H_X(t)S = \{0 \leq x^2 \leq t\}$ and $\iota_X \alpha = -f(x)\, dx^1$, and we get

(1) $\int_S \iota_X \alpha = \int_{-\infty}^\infty f(x)\, dx^1$

and

(2) $\int_{H_X(t)S}\alpha = - \int_0^{t} dx^2\, \int_{-\infty}^\infty dx^1\, f(x) \,.$

This very manifestly satisfies your formula. And if it works for little cubes then, because we are physicists, it works for everything.

Yeah, cool. This formula looks like it should have been known for ages, but I, too, cannot remember having seen it stated explicitly anywhere.
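For what it’s worth, here is a minimal numerical version of this little-cubes check (a numpy toy, with one consistent choice of orientations, since the discussion above is cavalier about the overall sign; the Gaussian test function `f` is mine):

```python
import numpy as np

# Check  int_S iota_X alpha = d/dt [ int_{H_X(t)S} alpha ]|_{t=0}
# for alpha = f dx1 ^ dx2, X = d/dx2, S = {x2 = 0}, with a Gaussian f.
# Signs follow iota_X (dx1 ^ dx2) = -dx1, applied consistently on both sides.

f = lambda x1, x2: np.exp(-x1**2 - x2**2)

x1 = np.linspace(-6.0, 6.0, 2001)
dx1 = x1[1] - x1[0]

# Left-hand side: int_S iota_X alpha = - int f(x1, 0) dx1
lhs = -np.trapz(f(x1, 0.0), dx=dx1)

# Right-hand side: H_X(t)S = {0 <= x2 <= t}, so the integral over the swept
# region is - int_0^t int f dx1 dx2; differentiate numerically at t = 0.
def swept(t, n=64):
    x2 = np.linspace(0.0, t, n)
    slices = [np.trapz(f(x1, v), dx=dx1) for v in x2]
    return -np.trapz(slices, x=x2)

h = 1e-4
rhs = (swept(h) - swept(0.0)) / h

print(lhs, rhs)  # agree to O(h): both ~ -sqrt(pi)
```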

I think if we allow ourselves to be a little cavalier with notation, then we can even put it in a form that is very suggestive of the application that you have in mind; namely, I suggest writing

(3) $\int_S \iota_X \alpha = \int_{H^\prime_X\left(0\right)S} \alpha \,.$

Then it would indeed be possible to write down the ‘super Stokes’ theorem’ as

(4) $\int_S (d + \iota_X)\alpha = \int_{H^\prime_X\left(0\right)S + \partial S} \alpha \,.$

Yes, good, I like that. Is there anything we can say about the chain $H^\prime_X\left(0\right)S + \partial S$?

Seems to be something like the infinitely tight ‘wrapping’ of $S$.

BTW, I think I made some progress with nonabelian connections on loop space. I found that one should maybe first concentrate on connections which are flat on loop space, i.e. which assign the identity group element to every (contractible) closed curve in loop space, i.e. to every torus.

This is not quite as trivial as it may sound. Indeed, for a loop space connection to be flat, both the 1-form $A$ and the 2-form $B$ are generically non-flat by themselves.

Moreover, for a true boundary effect in string theory, i.e. a scenario where all the nontrivial background is really living on the brane and coupled only to the ends of open strings, the flat loop space connection is precisely what we need and want.

As far as I can see everybody, including me, is pretty much in the dark concerning the physical interpretation of nonabelian 2-form backgrounds in string theory, but what I just wrote makes a lot of sense to me, now that I think about it. In particular it removes the confusion about how anything nonabelian could couple to a closed string. (Since, as you may have heard, in string theory non-abelianness comes from open strings that attach to several D-branes. The nonabelian $N\times N$ matrices are essentially coincidence matrices describing which end of which string ends on which of the $N$ branes.)

So maybe flat nonabelian connections on loop space are precisely what we should really be looking for. And I think this case is non-trivial, and maybe only here do all the desired properties hold.

If I find the time I’ll write that up in more detail.

Posted by: Urs Schreiber on July 12, 2004 3:26 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Too bad! :)

Oops, you are right. I wasn’t paying attention closely enough.

Don’t worry about it :) I know you’ve got a million things on your mind. I almost feel guilty for bugging you with this stuff (note: I only said “almost” :)).

I like the new (?) way to view $i_X$ very much, and the neat geometrical interpretation should carry over to loop space very naturally, which I think may help significantly in making everything more understandable. Especially considering the somewhat prominent role of $i_X$ and $\mathcal{L}_X$ on loop space.

Yes, good, I like that. Is there anything we can say about the chain $H_X'(0)S + \partial S$?

Well, since it is hard to picture $H_X'(0)$, I would suggest to instead consider the visualizable chain

(1) $H_X(t) S + \partial S$

for some finite $t$. We can even try to understand the operator

(2) $\partial_X(t) = H_X(t) + \partial.$

The first thing to note is the important property

(3) $H_X(t)^2 = H_X(t) H_X(t) = 0.$

The operator $H_X(t)$ has the nice geometrical picture of sweeping a $p$-chain $S$ along, forming a $(p+1)$-chain. If you sweep a $p$-chain once and then sweep it again, you get a degenerate $(p+2)$-chain, the integral over which will always vanish.

The neat thing about $\partial_X(t)$ is that it squares to

(4) $\partial_X(t)^2 = H_X(t) \partial + \partial H_X(t) = \phi_*(t) - 1,$

i.e.

(5) $\partial_X(t)^2 S = S(t) - S.$

This gives a beautiful interpretation of the Lie derivative, and in fact it is the transpose of the Lie derivative once you put a $d/dt$ out in front of the integral.
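Spelled out, keeping track of orientations as in equation (6) of my earlier comment: the boundary of a swept chain consists of the final copy, the initial copy and the swept sides,

$\partial\big(H_X(t)S\big) = \phi(t)_* S - S - H_X(t)\,\partial S\,,$

so together with $\partial^2 = 0$ and $H_X(t)^2 = 0$ one gets

$\partial_X(t)^2 = \partial\, H_X(t) + H_X(t)\,\partial = \phi(t)_* - 1\,,$

which is (4) above.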

On loop space, reparameterization invariance means that

(6) $\phi(t)_* S = S$

where $\phi(t)$ is the flow generated by sweeping points around the closed string. Therefore, it seems you could similarly state parameterization invariance via

(7) $\partial_K(t)^2 = 0.$

Neat, huh? :)

BTW, I think I made some progress with nonabelian connections on loop space.

Nice to hear. I bet you wish you could clone yourself now more than ever :)

Remember, these are the best years of your life so no complaining ;)

Eric

Posted by: Eric on July 12, 2004 4:02 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Since things are a little easier to understand using some finite $t$, I just got the idea to define some operator

(1) $i_X(t): \Omega^{p}\to \Omega^{p-1}$

via

(2) $\int_S i_X(t) \alpha := \int_{H_X(t) S} \alpha.$

Then we could define operators

(3) $d_X(t) = d + i_X(t)$

and

(4) $\mathcal{L}_X(t) = d_X(t)^2 = d\, i_X(t) + i_X(t)\, d.$

Using your “cavalier” notation, the regular Lie derivative is then actually

(5) $\mathcal{L}_X = \mathcal{L}'_X(0).$

Fun fun :)

Eric

Posted by: Eric on July 12, 2004 4:13 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

In an email to Urs, I said

You mentioned that i_X was somehow “T-dual” to d. Do you know where I might be able to find a discussion about this?

to which you replied

You will find the discussion as soon as you ask about it in the SCT! :-)

Ok. Here you go! :)

Eric

Posted by: Eric on July 15, 2004 3:57 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

The answer to this question is given in section 4.2.1 of hep-th/0401175, and in particular in equation (4.12).

The canonical supercommutation relations on loop space are

(1) $[\partial_{(\mu,\sigma)}, X^{\prime (\nu,\kappa)}] = - \delta_\mu^\nu\, \delta^\prime(\sigma-\kappa)$

and

(2) $[\mathcal{E}^{\dagger (\mu,\sigma)}, \mathcal{E}_{(\nu,\kappa)}] = \delta_\nu^\mu\, \delta(\sigma-\kappa) \,.$

These canonical commutation relations are preserved under the exchange

(3) $\partial_{(\mu,\sigma)} \leftrightarrow X^{\prime (\mu,\sigma)}$
(4) $\mathcal{E}^{\dagger (\mu,\sigma)} \leftrightarrow \mathcal{E}_{(\mu,\sigma)} \,.$

Physically this corresponds to exchanging the canonical momentum at each point of the string with its ‘winding’ excitation $X^\prime = \frac{d}{d\sigma} X$. Since the canonical brackets are preserved (as long as the 0-mode of $X$ is not involved), this operation preserves the constraint algebra of the string and hence maps consistent string backgrounds to consistent string backgrounds. This duality is known as $T$-duality. In the above-mentioned paper I demonstrate how the usual facts about T-duality - plus a little more - can be deduced from this algebra isomorphism.
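(As a quick check that the first bracket really is preserved: schematically, with indices raised and lowered by the flat target space metric, and using only the antisymmetry $\delta^\prime(-x) = -\delta^\prime(x)$, one has

$[X^{\prime(\nu,\kappa)}, \partial_{(\mu,\sigma)}] = -[\partial_{(\mu,\sigma)}, X^{\prime(\nu,\kappa)}] = \delta_\mu^\nu\,\delta^\prime(\sigma-\kappa) = -\delta_\mu^\nu\,\delta^\prime(\kappa-\sigma)\,,$

which has the same form as (1) with the roles of the two entries exchanged.)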

Ok, so this answers your question: The exterior derivative $\mathcal{E}^{\dagger (\mu,\sigma)}\partial_{(\mu,\sigma)}$ on loop space and the operator $\mathcal{E}_{(\mu,\sigma)}X^{\prime (\mu,\sigma)}$ of interior multiplication with the reparameterization Killing vector are interchanged under T-duality

(5) $\mathcal{E}^{\dagger (\mu,\sigma)}\partial_{(\mu,\sigma)} \leftrightarrow \mathcal{E}_{(\mu,\sigma)}X^{\prime (\mu,\sigma)} \,.$

A subtlety is that the above isomorphism does not work as soon as the undifferentiated coordinate field plays a role. This is related to the fact that you can T-dualize only (as far as I know) along Killing directions in target space. Namely, in these cases you can find coordinates so that the target space metric is independent of the dualized directions, so that no $X$ along these directions appears in the string’s constraints.

Posted by: Urs Schreiber on July 15, 2004 4:35 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Hmm…

Does this have any meaning for target space? I mean, what if $d$ and $i_X$ are defined on target space? Is there some duality

(1) $d \leftrightarrow i_X$

even in this case or do we have to go to loop space to see it?

Sorry if it is obvious :)

Eric

Posted by: Eric on July 15, 2004 5:57 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Well, if you forget about loop space you can note that all I really used is that $X^{\prime (\mu,\sigma)}$ is a Killing vector on loop space, which is not trivially a constant vector field.

So assume in more generality that there is a covariant derivative $\nabla$ on some manifold and a Killing vector $v$, so that (if application of the derivative is written as commutation with the respective operator)

(1) $[\hat \nabla_\mu, v_\nu] + [\hat \nabla_\nu, v_\mu] = 0 \,.$

This implies that the bracket

$[\hat \nabla_\mu, v_\nu]$

is invariant under the exchange $\nabla_\mu \leftrightarrow v_\mu$, because

(2) $[\hat \nabla_\mu, v_\nu] \rightarrow [v_\mu, \hat \nabla_\nu] = - [\hat \nabla_\nu, v_\mu] = + [\hat \nabla_\mu, v_\nu] \,,$

by the condition that $v$ is Killing.

So that’s formally what is going on, and it has as such nothing to do with loop space.

However, I would not know what this exchange means when the space it is used on is not loop space.

But maybe you can figure it out… :-)

Posted by: Urs Schreiber on July 15, 2004 6:14 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Is this anything like saying

(1) $\mathcal{L}_X = [d,i_X]$

is invariant under the change $d \leftrightarrow i_X$? I doubt it because this is true regardless of whether $X$ is Killing.

I know I’m being dense (I feel dense at the moment), but I don’t see how invariance under

(2) $\nabla_\mu \leftrightarrow X_\mu$

implies (or is related to)

(3) $d \leftrightarrow i_X.$

In what sense is the interchange a duality?

Sorry :)

Eric

Posted by: Eric on July 15, 2004 7:21 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

It is true that $\mathcal{L}_X$ is invariant under this transformation, but that’s just a subcase, so it is no contradiction that here this is true for all $X$.

To see the T-dualness of $d$ and $\iota_X$, just do the substitutions which I had given.

In the expressions

(1) $d = \mathcal{E}^{\dagger (\mu,\sigma)} \partial_{(\mu,\sigma)}$

and

(2) $\iota_K = \mathcal{E}_{(\mu,\sigma)} X^{\prime(\mu,\sigma)}$

substitute, literally, $X^{\prime (\mu,\sigma)}$ for $\partial_{(\mu,\sigma)}$ and vice versa, as well as $\mathcal{E}^{\dagger (\mu,\sigma)}$ for $\mathcal{E}_{(\mu,\sigma)}$ and vice versa.

Posted by: Urs Schreiber on July 15, 2004 8:16 PM | Permalink | Reply to this

Re: Scandinavian but not Abelian

Hi Urs,

I am extremely distracted by other things at the moment, but I’d still like to try to understand this. Is there some obvious way to express this duality (on target space) in a “coordinate free” manner?

I smell some kind of neat geometrical trick lying somewhere just under the surface, but can’t put my finger on it (yet).

Eric

Posted by: Eric on July 16, 2004 3:18 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

You are right that coordinates played an unduly crucial role in what I said. I haven’t thought much about a coordinate-free realization. The most invariant statement I know is that T-duality can be done along every Killing vector of target space. But it should be possible to improve on that.

Posted by: Urs Schreiber on July 16, 2004 9:13 AM | Permalink | Reply to this

T-duality

Hi Eric -

here is one way that maybe makes the geometric interpretation more manifest:

Concentrate on the fermionic part of T-duality for the moment. The duality simply exchanges ONB form creators $\sigma^a \wedge$ with ONB form annihilators $\iota_{\sigma^a}$.

The creation and annihilation operators generate an algebra isomorphic to two Clifford algebras with generators

(1) $\gamma_\pm^a := \sigma^a \wedge \pm\, \iota_{\sigma^a} \,.$

You see that T-duality acts on these as

(2) $\gamma_+^a \mapsto \gamma_+^a$
(3) $\gamma_-^a \mapsto - \gamma_-^a \,.$

This manifestly preserves the Clifford algebra. In fact, we can regard this as an orthogonal transformation in $O(D,D)$, where $D$ is the manifold’s dimension.

The interesting thing is that, no matter what signature the manifold has, the above $\gamma_\pm^a$ generators always generate $\mathrm{Cl}(D,D)$. T-dualities are just the $O(D,D)$ transformations which act on this doubled Clifford algebra, leaving it invariant.
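Explicitly, using $\{\sigma^a \wedge, \iota_{\sigma^b}\} = \delta^{ab}$, with the creators anticommuting among themselves and likewise the annihilators, one finds

$\{\gamma_\pm^a, \gamma_\pm^b\} = \pm 2\,\delta^{ab}\,,\qquad \{\gamma_+^a, \gamma_-^b\} = 0\,,$

so the $\gamma_\pm^a$ generate $\mathrm{Cl}(D,D)$ for any signature of the metric, and flipping the sign of all the $\gamma_-^a$ manifestly preserves these relations.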

The point about loop space is that there you naturally have bosonic analogues of the above fermionic construction.

Posted by: Urs Schreiber on July 16, 2004 2:15 PM | Permalink | PGP Sig | Reply to this

Re: Scandinavian but not Abelian

Hello,

I have seen this paper before and we probably even discussed it once upon a time, but here it is again

Yang-Mills Theory on Loop Space
S. G. Rajeev

We will describe some mathematical ideas of K. T. Chen on calculus on loop spaces. They seem useful to understand non-abelian Yang–Mills theories.

In it, the author makes the curious statement

The set of loops on space-time is an infinite dimensional space; calculus on such spaces is in its infancy. It is too early to have rigorous definitions of continuity and differentiability of such functions. Indeed most of the work in that direction is of no value in actually solving problems of interest (rather than in showing that the solution exists).

Now, you are throwing around all the tools of differential geometry on loop space as if it were something trivial. Is it possible that, almost as an afterthought, you came up with a legitimate differential geometry on loop space that has apparently been eluding understanding for ages? :)

If that is true, then I think it is a BIG deal :) So far you have been treating it as a mere tool to study your deformation theory, but it seems that differential geometry on loop space is something that a lot of people would find interesting. I would suggest that you maybe consider writing up a quick preprint called “Differential Geometry on Loop Space” or “Loop Space Differential Geometry” or something like that (instead of having the material stuffed away in an appendix of some other paper). Am I getting excited about nothing, or is your idea more significant than either of us has realized so far? :)

Come to think of it, writing up this report on differential geometry on loop space is what you already suggested, but I think maybe it should take a higher priority than I originally thought. As a warmup for that, I’ve been working on regular old differential geometry. I am very happy to have (re?)discovered this geometrical definition of the interior product $i_X$ in terms of sweeping a chain along the flow generated by $X$. I haven’t seen this before, but it is beautiful. All textbooks on differential geometry should present $i_X$ this way. Maybe we should write a book on differential geometry :) That is what I was trying to do in my PhD thesis, but after 6 years and 350 pages, they finally kicked me out and made me graduate. I should finish that up one of these days :)

Eric

PS: This paper is also interesting

The Extended Loop Group: An Infinite Dimensional Manifold Associated with the Loop Space
Cayetano Di Bartolo, Rodolfo Gambini, Jorge Griego

A set of coordinates in the non-parametric loop-space is introduced. We show that these coordinates transform under infinite dimensional linear representations of the diffeomorphism group. An extension of the group of loops in terms of these objects is proposed. The enlarged group behaves locally as an infinite dimensional Lie group. Ordinary loops form a subgroup of this group. The algebraic properties of this new mathematical structure are analyzed in detail. Applications of the formalism to field theory, quantum gravity and knot theory are considered.

and I’m trying to see the relation, if any, to your stuff.

Posted by: Eric on July 12, 2004 4:36 AM | Permalink | Reply to this

Re: Scandinavian but not Abelian

calculus on such spaces is in its infancy

Mathematicians will naturally have a different attitude towards this stuff. I have cited a couple of mathematical papers that do work on loop space and whose techniques and results are compatible with the maybe naive relations. My guiding principle is that I know that I am just doing CFT in a different picture. This for instance tells me (and it can be checked) that the naive computations work when the background (the metric on loop space, for instance) satisfies the string’s background field equations of motion.

All I say about loop space can equivalently be rephrased in more ordinary string Hilbert space language. I would tend to think that all problems that one would encounter in rigorously defining calculus on loop space can be understood in terms of operator ordering effects and related divergences in this string Hilbert space language. But I agree that it would be desirable to have formulations that bridge between the mathematical and physical perspectives on loop space.

Posted by: Urs Schreiber on July 12, 2004 10:29 AM | Permalink | PGP Sig | Reply to this

Calculus on Loop Space

Eric had cited S. G. Rajeev as writing in hep-th/0401215

calculus on such spaces is in its infancy

I had replied, essentially, that I don’t care. :-)

But is there need to be worried?

In the paper cited above Rajeev summarizes some basic elements of the approach to loop space calculus by the mathematician K.-T. Chen, who apparently has developed most of what is known about calculus on loop space, using his method of ‘iterated integrals’.

The question is: How does that compare to the apparently more naive approach that I give here?

The answer is: It is precisely the same, up to the fact that Chen makes explicit the notion of ‘loop space functions as formal power series’.

What I mean by this is that if you take the ‘naive’ notion of loop space calculus and apply it to functions on loop space of the form as in equation (14) of the above paper, which have the form of ‘iterated integrals’ (I’d rather call them path-ordered integrals), then the definition of the product in (20) as well as the definition of the exterior derivative in (24) follows. The first fact is obvious; the second is the content of equation (3.8) in my hep-th/0407122, which is a review of a paper by Getzler, Jones and Petrack - which again probably goes back to Chen himself, who presumably used this calculation to motivate his definition.

So it seems to me that all that would be necessary to make what I have written about loop space so far rigorous and compatible with Chen is to say that the function space on loop space that I am considering is that of formal power series in these ‘iterated integrals’.
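To make the comparison concrete, here is a small numerical sketch of such a path-ordered integral of two 1-forms along a loop. The discretization and all the names in it are mine, purely for illustration; for A = dx and B = dy on the unit circle the ordered double integral comes out as \pi, the enclosed area:

```python
import numpy as np

# Toy discretization (for illustration only) of a Chen-type iterated,
# i.e. path-ordered, integral of two 1-forms A, B along a loop:
#   I(A,B) = int_{0 <= s1 <= s2 <= 2 pi} A(s1) B(s2) ds1 ds2 ,
# where A(s) is shorthand for the pullback A_mu(X(s)) X'^mu(s), etc.
N = 2000
s = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
ds = 2 * np.pi / N
X = np.stack([np.cos(s), np.sin(s)], axis=1)   # the unit circle in R^2
dX = np.gradient(X, s, axis=0)                 # X'(s)

a = dX @ np.array([1.0, 0.0])   # pullback of A = dx
b = dX @ np.array([0.0, 1.0])   # pullback of B = dy

inner = (np.cumsum(a) - a) * ds  # int_0^{s2} a(s1) ds1, keeping s1 < s2
print(np.sum(inner * b) * ds)    # ~ pi: for dx, dy this is the enclosed area
```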

Posted by: Urs Schreiber on July 16, 2004 7:07 PM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

The question is: How does that compare to the apparently more naive approach that I give here?

I am a little confused because here you say

Yes, if we are thinking of the configuration space of the string then it is

- unbased

- parameterized

- consisting of functions from the circle into target space that have a Fourier decomposition

- oriented or unoriented depending on whether we want to do type II or type I strings.

However, in your paper, you define loop space \mathcal{L}(\mathcal{M}) as simply the space of smooth embeddings of S^1 into \mathcal{M}

(1)\mathcal{L}(\mathcal{M}) := C^\infty(S^1,\mathcal{M}).\quad\quad\quad(2.1)

Would it be possible to give a more precise definition of what loop space is? :) If it is just the space of maps from S^1 to \mathcal{M}, then I can see that it is unbased, but I don’t see how it is parameterized. I also don’t see how you can make that statement about Fourier decomposition precise either. This reminds me of an old discussion with Toby Bartels about the difference between “oriented” and “orientable.” Should loop space be the space of “parameterized” embeddings of S^1 into \mathcal{M} or the space of “parameterizable” embeddings of S^1 into \mathcal{M}? The latter seems redundant. The former means that we actually choose a parameterization for the loop, and two different parameterizations would be different points in loop space. I really don’t think that is what you want.

I tend to think that loop space should be the space of unparameterized (but parameterizable) maps from S^1 to \mathcal{M}, and a choice of parameterization is akin to choosing a coordinate chart.

Just prior to your equation

(2)\left. U\cdot V\right|_X = \int d\sigma\, g_{\mu\nu}(X(\sigma)) U^\mu(X(\sigma)) V^\nu(X(\sigma)),\quad\quad\quad(2.2)

I think you should say something like, “In a coordinate chart and for a given parameterization, the inner product is given by…” In fact, what are you integrating over? Is it \sigma:[0,2\pi]\to\mathcal{M} with \sigma(0) = \sigma(2\pi)? If so, you should put the domain of integration in explicitly, e.g.

(3)\left. U\cdot V\right|_X = \int_{[0,2\pi]} d\sigma\, g_{\mu\nu}(X(\sigma)) U^\mu(X(\sigma)) V^\nu(X(\sigma)),\quad\quad\quad(2.2)

or something. At the least, you should say that \sigma is a parameterized loop instead of a parameterizable loop.
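For what it’s worth, here is how I read (2.2) once a parameterization is fixed, as a toy discretization (the code and every name in it are my own invention, just to have something concrete in front of us). With a constant contraction it returns 2\pi no matter what the loop looks like:

```python
import numpy as np

# Naive discretization of (2.2): fix the parameterization sigma in [0, 2 pi),
# sample the loop at N points, and sum over the parameter.
N = 1000
sigma = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
d_sigma = 2 * np.pi / N

g = lambda x: np.eye(2)              # flat target space metric g_{mu nu}
U = lambda x: np.array([1.0, 0.0])   # constant vector fields, so that
V = lambda x: np.array([1.0, 0.0])   #   g(U, V) = 1 everywhere

def inner(X):
    """U.V|_X = int_0^{2 pi} dsigma g_{mu nu}(X(s)) U^mu(X(s)) V^nu(X(s))."""
    return sum(U(x) @ g(x) @ V(x) for x in X) * d_sigma

tiny = np.stack([0.01 * np.cos(sigma), 0.01 * np.sin(sigma)], axis=1)
huge = np.stack([100.0 * np.cos(sigma), 100.0 * np.sin(sigma)], axis=1)
print(inner(tiny), inner(huge))      # both 2*pi, whatever the size of the loop
```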

Argh. My brain hurts. It seems like you need quite a bit of surgery to clean things up. Let’s say we have vector fields U,V on \mathcal{M} and a map

(4)\gamma: S^1\to\mathcal{M},

then could we simply define an inner product on loop space via

(5)\left. U\cdot V\right|_\gamma = \int_{S^1} \gamma^*(g(U,V))\,\vol,

where g(U,V) is, of course, a 0-form on \mathcal{M} and \vol is a volume form on S^1 obtained from \gamma^*(g), i.e. pulling back the metric from \mathcal{M} to S^1? A rule of thumb of mine, and I think it is a good rule, is that you do not really understand something until you can express it in a coordinate free manner. If the above is not correct, could you give the correct coordinate free version of your inner product on loop space? This does seem to be different from what you have, because it seems like your \vol is normalized so that

(6)\int_{S^1} \vol = 2\pi.

Which is more natural? I think mine is, but I could be wrong :) Then again, mine has the effect that if the loop shrinks to a point, then the inner product goes to zero as well. I am not sure if this is a good or bad thing, but it almost seems like a good thing to me. Let’s say that we are in a region of \mathcal{M} where g(U,V) is constant. Your definition will give the same inner product regardless of how big the loop is within the region and regardless of how many times it wraps around. Is this really what you want? I would almost expect that if a loop wrapped around the same curve twice, then the inner product would be scaled by a factor of 2.
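To illustrate (this is nothing but a toy discretization of my proposal), weighting by the induced length element makes the result track the length of the loop, and a doubly wound loop picks up exactly that factor of 2:

```python
import numpy as np

# Volume-weighted inner product: weight each parameter cell by the induced
# length element |X'(sigma)| dsigma from the pulled-back metric gamma^*(g).
N = 1000
sigma = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
d_sigma = 2 * np.pi / N

def weighted_inner(X, gUV=lambda x: 1.0):
    dX = np.gradient(X, sigma, axis=0)
    vol = np.linalg.norm(dX, axis=1) * d_sigma   # induced volume element
    return sum(gUV(x) * v for x, v in zip(X, vol))

circle = np.stack([np.cos(sigma), np.sin(sigma)], axis=1)
doubled = np.stack([np.cos(2 * sigma), np.sin(2 * sigma)], axis=1)
print(weighted_inner(circle))    # ~ 2*pi : the length of the loop
print(weighted_inner(doubled))   # ~ 4*pi : wrapping twice doubles the result
```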

Here is a crazy thought…

What if we define an inner product on points AND loops via

(7)\left. U\cdot V\right|_{(p,\gamma)} = \left. g(U,V)\right|_p + \frac{1}{T} \int_{S^1} \gamma^*(g(U,V))\,\vol.

In this case, if the loop were tiny with respect to g, i.e. if

(8)\int_{S^1} \vol \ll T,

then the point inner product will dominate, but as the loop grows it begins to offset the contribution from the point. In this way, the point inner product is kind of like a low energy limit. Or something :) Of course, once you iterate once, why not continue? Perhaps the inner product of higher-dimensional objects may be written something like

(9)U\cdot V = \exp(\alpha L)\, g(U,V),

where

(10)L\, g(U,V) = \int_{S^1} \gamma^*(g(U,V))\, \vol.

Sidenote: this makes me want to consider “point space” \mathcal{P}(\mathcal{M}), i.e. the space of maps from a point into \mathcal{M}. If you make this rigorous, you should end up with \mathcal{P}(\mathcal{M})\sim\mathcal{M}. Doing so might shed some light on how to make \mathcal{L}(\mathcal{M}) rigorous.

Continuing with the previous thought…

(11)L^2 g(U,V) = \int_{S^1} \gamma_2^*\left[\int_{S^1} \gamma_1^*(g(U,V))\,\vol_1\right]\vol_2,

where \gamma_1 \in \mathcal{L}(\mathcal{M}) and \gamma_2\in\mathcal{L}^2(\mathcal{M}) := \mathcal{L}(\mathcal{L}(\mathcal{M})). Finally,

(12)L^n g(U,V) = \int_{S^1} \gamma_n^*\left[\int_{S^1} \gamma_{n-1}^*\left[\cdots \int_{S^1} \gamma_1^*(g(U,V))\,\vol_1\right]\cdots \vol_{n-1}\right]\vol_n,

where \gamma_i\in\mathcal{L}^i(\mathcal{M}).

I am kind of on the fence with this one. It could either be brilliant, or just another one of my whacky ideas. What do you think? :)

Eric

PS: A friend of mine and I have a saying, “All ideas are brilliant ideas for the first five minutes.” Let’s see if this one survives :)

Posted by: Eric on July 18, 2004 8:16 AM | Permalink | Reply to this

Re: Calculus on Loop Space

One more thing…

It is probably obvious, but I just want to point out a couple of things. First, if \phi is a 0-form on \mathcal{M}, then

(1)L \phi = \int_{S^1} \gamma^*(\phi)\, \vol

is a 0-form on \mathcal{L}(\mathcal{M}), although it might be interpreted as a 1-form on \mathcal{M}, and

(2)L^n \phi = \int_{S^1} \gamma_n^*\left[\cdots \int_{S^1} \gamma_1^*(\phi)\, \vol_1\right]\cdots \vol_n

is a 0-form on \mathcal{L}^n(\mathcal{M}), although it might be interpreted as an n-form on \mathcal{M}.

In particular,

(3)L^n 1

is a 0-form on \mathcal{L}^n(\mathcal{M}). When you integrate, i.e. evaluate, it at a point \gamma_n\in \mathcal{L}^n(\mathcal{M}), you get

(4)\left. L^n 1\right|_{\gamma_n} = \text{“volume of } \gamma_n \text{”}.

Of course, a point \gamma_n\in\mathcal{L}^n(\mathcal{M}) maps to an n-loop in \mathcal{M}, and integrating the volume form on \mathcal{M} over the image of \gamma_n also gives the volume of the image. In this way, L^n 1 may also be thought of as the volume form on \mathcal{M}.
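Just to convince myself, here is a toy implementation of L on 0-forms (all the names are mine); applied to the constant 0-form 1, it does return the length of the loop:

```python
import numpy as np

# (L phi)|_gamma = int_{S^1} gamma^*(phi) vol, with vol induced from the
# target space metric; L(1) should then be the "volume" (length) of the loop.
N = 1000
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dt = 2 * np.pi / N

def L(phi):
    def L_phi(gamma):                              # gamma: (N, dim) samples
        dgamma = np.gradient(gamma, t, axis=0)
        vol = np.linalg.norm(dgamma, axis=1) * dt  # induced volume element
        return np.sum(np.array([phi(p) for p in gamma]) * vol)
    return L_phi

circle3 = np.stack([3 * np.cos(t), 3 * np.sin(t)], axis=1)
print(L(lambda p: 1.0)(circle3))   # ~ 6*pi, the circumference of the circle
```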

I like this :) I am anxious to hear what you think.

Another thing…

It might be true in general, but it would be interesting to find under what conditions there exists a 1-form \alpha such that for any 0-form \phi we have

(5)\int_{S^1} \gamma^*(\phi)\, \vol = \int_{S^1} \gamma^*(\alpha).

If such an \alpha existed, we would have

(6)L \phi = \int_\gamma \alpha,

which would make it explicit how L \phi may be thought of as a 1-form on \mathcal{M}.

Eric

PS: This somehow reminds me of the discussion over in the “big boy” thread :) I think that if you define things correctly, then for a given torus in \mathcal{M}, there should be only one point in \mathcal{L}^2(\mathcal{M}) having this torus as its image. If you get anything different, I would think that perhaps you have inappropriately defined \mathcal{L}^2(\mathcal{M}) and should rethink things. Maybe \mathcal{L}^2(\mathcal{M}) should be equivalence classes of “loops of loops”, since one loop of a loop that has the same image as another loop of a loop is really the same 2-loop, but with a different (generalized notion of) parameterization. A 2-loop is not a choice of a loop of a loop; it is something such that such a parameterization can be made. Just like a loop should not be a choice of parameterization, but something that can be parameterized in a certain way, i.e. it is parameterizable. A 2-loop is a parameterizable loop of loops, not a particular choice of parameterization.

If I am not being redundant enough, let me say it another way. If you look at a torus, there are tons of ways to write it as a loop of loops. However, I am suggesting that making such a choice is a kind of parameterization of the 2-loop, but the 2-loop, and consequently the space of 2-loops, should be independent of how you choose to split a torus into loops of loops.

This is not just a made up idea out of the blue. Rather, it is suggested by the explicit form of

(7)L^n \phi.

When you look at the full expanded version, it is clear that the iterated integrals (not necessarily having anything to do with those of Chen) amount to choices of parameterizations of an n-loop, and the integral should be independent of this choice. It is like a wicked version of Fubini’s theorem :)

Posted by: Eric on July 18, 2004 9:32 AM | Permalink | Reply to this

Re: Calculus on Loop Space

Hi again,

It is painfully obvious how a choice of a way to express a torus as a loop of loops is a kind of parameterization. In fact, it is more than “kind of” a parameterization. It is nothing BUT a choice of coordinates. Well, almost :) It is a choice of coordinate curves. What I described as a wicked version of Fubini’s theorem is nothing but plain old Fubini’s theorem :) The only difference is that we specify coordinate curves without putting numbers on them :)

Now I can say it with a fair amount of confidence. There should be only one (Highlander?) point in \mathcal{L}^2(\mathcal{M}) having a given torus as its image.

Eric

PS: This statement will probably have to be altered if you start considering oriented loops.

Posted by: Eric on July 18, 2004 9:55 AM | Permalink | Reply to this

Re: Calculus on Loop Space

Good morning!

It seems that perhaps you took the day off of SCT today. I hope you had a nice day :)

I was just thinking about this stuff a little more. Unless I am just way off, it seems like given a 0-form \phi on \mathcal{M}, we can think of

(1)L^p \phi

as either a 0-form on \mathcal{L}^p(\mathcal{M}) or a p-form on \mathcal{M}. However, we should not forget all the intermediate interpretations. L^p \phi may also be thought of as a (p-1)-form on \mathcal{L}(\mathcal{M}), a (p-2)-form on \mathcal{L}^2(\mathcal{M}), or in general as a (p-q)-form on \mathcal{L}^q(\mathcal{M}).

In particular, given a p-form \alpha on \mathcal{M}, we can think of L\alpha as a p-form on \mathcal{L}(\mathcal{M}). Similarly, if X is a vector field on \mathcal{M}, then LX is a vector field on \mathcal{L}(\mathcal{M}).

With that said, after looking at your paper some more, it seems like I might have an alternative way to look at things.

First, given a 1-form \alpha and a vector field X on \mathcal{M}, define a map

(2)\contract(\alpha\otimes X) := \langle\alpha,X\rangle.

We want the contraction map to commute with the loop map, i.e.

(3)L \circ \contract = \contract \circ L

so that

(4)L\langle\alpha,X\rangle = \contract(L(\alpha\otimes X)).

Furthermore, define a tensor product on loop space in terms of the tensor product on the target space via

(5)(L\alpha)\otimes (L\beta) := L(\alpha\otimes\beta),

so that we have

(6)\langle L\alpha,L X\rangle = L \langle\alpha,X\rangle,

due to the fact that L and \contract commute. This also gives us the wedge product on loop space in terms of that on target space via

(7)(L\alpha)\wedge(L\beta) = L(\alpha\wedge\beta).

Now if we write

(8)X\cdot Y = g(X,Y) = \contract[\contract(g\otimes X)\otimes Y]

and

(9)LX\cdot LY = (Lg)(LX,LY) = \contract[\contract((Lg)\otimes(LX))\otimes (LY)],

then using the above relations, we have

(10)LX\cdot LY = L(X\cdot Y).

This seems to be the essence of your Equation (2.2). It seems like the important thing for constructing differential geometric operators on loop space is that

(11)(1)\quad L\circ\contract = \contract\circ L

and

(12)(2)\quad L\text{ is an algebra morphism.}

Not that I know anything about category theory, but it almost seems like L is a functor or something :)

I am also willing to bet that we can write

(13)(3)\quad d\circ L = L\circ d

so that

(14)d[(L\alpha)\wedge(L\beta)] = [d(L\alpha)]\wedge(L\beta) + (-1)^p (L\alpha)\wedge d(L\beta).

This is all pretty neat and if it is at least partially correct, it takes a lot of the mystery out of the stuff in your notes. However, there still are some questions.

What are the conditions such that for any p-form \alpha_L on \mathcal{L}(\mathcal{M}) there exists a p-form \alpha on \mathcal{M} such that \alpha_L = L\alpha?

When this condition is satisfied, we can easily translate back and forth between \mathcal{L}(\mathcal{M}) and \mathcal{M}. In fact, we could easily translate back and forth between \mathcal{L}^p(\mathcal{M}) and \mathcal{L}^q(\mathcal{M}). We can easily get differential geometry on more loopy spaces just by further applications of L :)

(15)(1)\quad L^p\circ\contract = \contract\circ L^p
(16)(2)\quad L^p \text{ is an algebra morphism.}
(17)(3)\quad d\circ L^p = L^p\circ d.

Ok. Maybe one more thing before submitting this :)

We have

(18)i_X \alpha = \contract(\alpha\otimes X),

so that

(19)L(i_X \alpha) = i_{LX} L\alpha.

Therefore,

(20)d_{LX} = d + i_{LX}

and

(21)\mathcal{L}_{LX} = d_{LX}^2 = [d,i_{LX}].

Furthermore, in a coordinate chart, we have

(22)\{L\mathcal{E}^{\dagger\mu},L\mathcal{E}^{\dagger\nu}\} = L\{\mathcal{E}^{\dagger\mu},\mathcal{E}^{\dagger\nu}\} = 0
(23)\{L\mathcal{E}_{\mu},L\mathcal{E}_{\nu}\} = L\{\mathcal{E}_{\mu},\mathcal{E}_{\nu}\} = 0
(24)\{L\mathcal{E}_{\mu},L\mathcal{E}^{\dagger\nu}\} = L\{\mathcal{E}_{\mu},\mathcal{E}^{\dagger\nu}\} = L\delta_\mu^\nu.

Unless I’m mistaken, this seems to be a simple restatement of your Equation (2.21), but this makes a little more sense to me.
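As a sanity check for myself, here is a tiny finite-dimensional toy model of those anticommutators at a single point (my own conventions throughout): on the exterior algebra of R^2, \mathcal{E}^{\dagger\mu} acts as the wedge with dx^\mu and \mathcal{E}_\mu as the interior product, and the relations come out as stated:

```python
import numpy as np

# Exterior algebra of R^2, basis ordered as (1, dx, dy, dx^dy).
# Ed_mu = wedge with dx^mu (creation), E_mu = interior product (annihilation).
Ed1 = np.zeros((4, 4)); Ed1[1, 0] = 1; Ed1[3, 2] = 1    # dx ^ .
Ed2 = np.zeros((4, 4)); Ed2[2, 0] = 1; Ed2[3, 1] = -1   # dy ^ . (dy^dx = -dx^dy)
E1, E2 = Ed1.T, Ed2.T        # interior products: adjoints w.r.t. this basis

def anti(A, B):
    """Anticommutator {A, B}."""
    return A @ B + B @ A

for mu, Emu in [(1, E1), (2, E2)]:
    for nu, Ednu in [(1, Ed1), (2, Ed2)]:
        # {E_mu, E^{dagger nu}} should equal delta_mu^nu times the identity
        print(mu, nu, np.allclose(anti(Emu, Ednu), (mu == nu) * np.eye(4)))
print(np.allclose(anti(Ed1, Ed2), 0), np.allclose(anti(E1, E2), 0))
```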

Now, it also seems that since

(25)d_{LX} = L\, d_{X}

that we’d also have

(26)e^{-LW} d_{LX} e^{LW} = L[e^{-W} d_X e^{W}].

Is it possible that my dream can be fulfilled, and the stuff you are doing with deformations on loop space can be interpreted as deformations on \mathcal{M} followed simply by an application of L? :)

I really gotta run now! More later :)

Eric

Posted by: Eric on July 19, 2004 12:37 AM | Permalink | Reply to this

Re: Calculus on Loop Space

Hello again,

I was just working through some details to see if we’d also have

(1)(4)\quad L\circ d^\dagger = d^\dagger\circ L.

It appears that this is probably the case, but the derivation highlighted something I hadn’t thought of yet. If \vol is the volume n-form on \mathcal{M}, then it makes sense to consider L\vol as the volume n-form on \mathcal{L}(\mathcal{M}). The somewhat strange thing is that a p-form on \mathcal{L}(\mathcal{M}) appears like a (p+1)-form on \mathcal{M}. If L\vol is an n-form on \mathcal{L}(\mathcal{M}), then there is no corresponding (n+1)-form on \mathcal{M}. So there are some forms on \mathcal{L}(\mathcal{M}) that have no counterpart on \mathcal{M}. If we don’t let this bother us, then we can write down a global inner product of forms on \mathcal{L}(\mathcal{M}) via

(2)[L\alpha,L\beta] = \int_{\mathcal{L}(\mathcal{M})} (L\alpha\cdot L\beta)\, L\vol = \int_{\mathcal{L}(\mathcal{M})} L[(\alpha\cdot \beta)\, \vol].

It seems reasonable that we should have

(3)\int_{\mathcal{L}(\mathcal{M})} L[(\alpha\cdot \beta)\, \vol] = \int_{\mathcal{M}} (\alpha\cdot\beta)\, \vol,

which reminds me of pull-back, so that

(4)[L\alpha,L\beta] = [\alpha,\beta].

Once you have this, a few lines give Equation (4) above.

So far I have no evidence to suggest that anything I am saying makes any sense, but that has never stopped me before :) Now, we can take all of this and write down an obvious action for loop Maxwell theory on loop space. Following standard procedures, after a few lines we’d end up with the loop Maxwell equations of motion, which are given by

(5)d\, LF = 0

and

(6)d^\dagger LF = Lj,

where LF = d\, LA for some 1-form LA on loop space. If this is correct, then it means that given any solution F to Maxwell’s equations on \mathcal{M}, we automatically get a solution LF to loop Maxwell’s equations by simply applying L to F. However, loop Maxwell’s equations are more like 2-form Maxwell’s equations when viewed from the perspective of \mathcal{M}.

Since Maxwell’s equations are conformal, I don’t see any reason why loop Maxwell’s equations would not be conformal. Hence, we’ve got a conformal field theory on loop space. This sounds a lot like string theory to me :) Wouldn’t it be wild if loop Maxwell’s equations were related to string theory? :) I can’t possibly imagine that they aren’t.

It is probably time to review some old discussions on p-form electromagnetism :)

Eric

PS: This stuff has messed with my sleep schedule. I couldn’t get to sleep until nearly 5am last night. Then I slept all day today. What are the chances I will be able to sleep tonight? I’ve got to be at work at 8:30am. What are the chances that is going to happen? :)

Posted by: Eric on July 19, 2004 5:06 AM | Permalink | Reply to this

Re: Calculus on Loop Space

It seems that perhaps you took the day off of SCT today. I hope you had a nice day :)

Yes, we had a big party on Saturday, and including preparation and ‘post-production’, this really took all of the weekend!

Now I am working hard to reply to all the comments that you have posted while I was relaxing. ;-)

It is great to see you thinking so much about this stuff, but we need to get in sync again!

Most of what you write about that L operator makes sense to me, but, as I have said in another comment posted a few minutes ago, I think one should not restrict attention to loop space objects which have target space counterparts, and one should not necessarily include that induced volume factor.

So I would not give the operator L the fundamental meaning that you seem to have in mind, even though it certainly exists and plays its role. For instance, except for the volume factor, it is, I think, precisely the operation that I indicate, for instance, in equation (3.1) of hep-th/0407122, applied to a differential form on target space.

So the range of L would be those objects on loop space which directly come from integrating any target space object around the loop, roughly.

I think what you write about deformations at the very end is true, but it only applies to deformations with e^{\text{something}} where ‘something’ is in the range of L. This is not true for all deformations that one might want to consider. For instance, the deformation which induces a gauge field background involves the vector field X^\prime, which does not have any target space counterpart. On the other hand, in the cases where ‘something’ is of the correct form, then I think the equation that you write down is correct.

I’ll stop at this point and wait for your response first.

Posted by: Urs Schreiber on July 19, 2004 11:14 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

I am anxious to hear what you think.

I think that what you write does make sense. But, as I said in my previous comment, it is not quite what I need, I think, due to that volume factor, for instance.

Here is how I would suggest to think about these matters:

So let loop space be a suitable space of maps from (0,2\pi) into target space. A vector field V on this space is formally something like

(1)V = V^{(\mu,\sigma)}(X)\partial_{(\mu,\sigma)} = \int_0^{2\pi} d\sigma\, V^\mu(\sigma) \frac{\delta}{\delta X^\mu(\sigma)} \,.

A 1-form W on loop space is dual to that, and the pairing must be

(2)W(V) = W_{(\mu,\sigma)}V^{(\mu,\sigma)} = \int_0^{2\pi} W_\mu(\sigma) V^\mu(\sigma) \, d\sigma \,.

In order to get an inner product we have to specify a metric on loop space. The metric that I need is

(3)G_{(\mu,\sigma)(\nu,\kappa)}(X) = g_{\mu\nu}(X(\sigma))\delta(\sigma,\kappa) \,.

You could in principle pick another metric, for instance one involving the induced volume factor. That would give the inner product that you are proposing, I think.

One important aspect of the above metric is that K^{(\mu,\sigma)} = X^{\prime (\mu,\sigma)} is a Killing vector.
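In case it helps to see the index structure, here is a toy discretization of this metric on ‘polygon space’, where \sigma becomes a discrete index i and the contraction is a plain sum over the double index (\mu, i) (the code and all names are just for illustration):

```python
import numpy as np

# Polygon-space version of G_{(mu,sigma)(nu,kappa)} = g_{mu nu}(X(sigma)) delta(sigma,kappa):
# the delta makes G block diagonal in the discretized parameter index.
N, D = 8, 2
sigma = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
X = np.stack([np.cos(sigma), np.sin(sigma)], axis=1)   # a polygonal loop

g = lambda x: np.eye(D)                                # target space metric
G = np.zeros((N, D, N, D))
for i in range(N):
    G[i, :, i, :] = g(X[i])                            # delta(sigma, kappa)

U = np.random.randn(N, D)    # components U^{(mu, i)} of a tangent vector at X
V = np.random.randn(N, D)
print(np.einsum("ia,iajb,jb->", U, G, V))              # U.V|_X, one number
```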

I think that if you define things correctly, then for a given torus in \mathcal{M}, there should be only one point in \mathcal{L}^2(\mathcal{M}) having this torus as its image.

That would be the case on unparameterized loop space. In the end, this is what is physically relevant, but to set up the formalism we really want parameterized loop space. The space of functions on parameterized loop space is our ‘kinematical space’. By restricting to those functions which satisfy the physical constraints, which in particular include reparameterization invariance, we get to the ‘physical space’ of functions on unparameterized loop space.

If you look at a torus, there are tons of ways to write it as a loop of loops. However, I am suggesting that making such a choice is a kind of parameterization of the 2-loop,

Yes, exactly. All these are physically equivalent, but correspond to different points in the parameterized loop-loop space.

I think this is important. Because if you want to associate a ‘holonomy’ with a surface in \mathcal{M}, you really need a connection on parameterized \mathcal{L}(\mathcal{M}) which assigns the same holonomy to all the corresponding elements in \mathcal{L}^2(\mathcal{M}). This is a pretty strong condition. It is solved trivially by flat connections on parameterized \mathcal{L}(\mathcal{M}). Perhaps it is solved only by these. This is what I expect from what I wrote here.

Posted by: Urs Schreiber on July 19, 2004 10:27 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Eric -

you are right, that definition I give is not good, and even wrong. For one, it should not read ‘embedding’, because self-intersections must be allowed. I’ll change that. Thanks for noticing; somehow this embarrassing mistake went unnoticed so far.

Then, I am really thinking of a space of maps from the interval \sigma \in (0,2\pi) into target space which are periodic with period 2\pi. So there is a parameterization, and we can get rid of it by going to the subspace of functions on this loop space which are invariant under reparameterization, i.e. the space of functions on which \mathbf{d}_K and all of its modes are nilpotent.

Let’s say we have vector fields U, V on \mathcal{M} and a map

Wait, that’s not what we need. We really need vector fields on loop space. Not every vector field on loop space is related to one on target space. This remark pertains to the other constructions that you mention. They don’t seem to fit into the context which I am considering.

Still, one can consider point particle limits and the like. These correspond to maps which are almost constant on (0,2\pi).

Posted by: Urs Schreiber on July 19, 2004 9:35 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

One more comment. You wrote:

Your definition will give the same inner product regardless of how big the loop is within the region and regardless of how many times it wraps around. Is this really what you want? I would almost expect that if a loop wrapped around the same curve twice, then the inner product would be scaled by a factor of 2.

The definition (2.2) that I give (which, by the way, is not my invention) does not give the same result for loops that wind several times around themselves. For instance, if you substitute X(\sigma) with \tilde X(\sigma) := X(2\sigma) in that equation, you do not get the same result.

Posted by: Urs Schreiber on July 19, 2004 9:46 AM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

It is somewhat encouraging that you think that maybe the stuff I wrote is not completely garbage :)

About the inner product, maybe I do not understand your notation. Let’s consider a region of \mathcal{M} where g(U,V) is constant, and without loss of generality set g(U,V) = 1 so that we have

(1)\left. U\cdot V \right|_X = \int_{[0,2\pi]} d\sigma.

How is this ever going to give you anything different from 2\pi, regardless of whether you reparameterize the curve? If you set \sigma' = 2\sigma you still get

(2)\left. U\cdot V \right|_X = \int_{[0,2\pi]} d\sigma = \frac{1}{2} \int_{[0,4\pi]} d\sigma' = 2\pi.

I think I misunderstood what you meant :) I almost suspect that you want to tell me that if you set \sigma' = 2\sigma, then the loop is really the same curve traversed twice. If this is the case, then I would argue that this goes against the way you stated the definitions, i.e. a curve is a map c:[0,2\pi]\to\mathcal{M}. If you adhere to this, then the inner product over a constant g(U,V) does always give the same result regardless of the size of the loop or how many times it wraps around. If you want to allow parameterized curves from any connected interval I\subset\mathbb{R} to \mathcal{M}, then I think you might need to state the definition of your inner product differently. Otherwise, it seems the inner product will depend on what parameterization you choose, which would be unacceptable I think.

I do also understand how my inner product differs from yours in that I have a volume form, and whether or not your inner product is your invention, I think that we should consider the slight possibility that mine is the one we want to use. Maybe :)

Gotta get ready for work. Argh! :)

More later,
Eric

Posted by: Eric on July 19, 2004 12:48 PM | Permalink | Reply to this

Re: Calculus on Loop Space

Hi Eric -

you wrote:

It is somewhat encouraging that you think that maybe the stuff I wrote is not completely garbage :)

When wondering about my enthusiasm you should always keep in mind that when I write my morning comments I do that after having moderated 30+ spr posts, possibly some sps posts, have looked into my private email and read a couple of comments on the SCT. This may be the reason, as happened recently, that I may not immediately see all the benefits of the constructions that you propose. Just be patient with me, I will get it eventually! :-)

Regarding the inner product, let me say the following:

In my conventions \sigma always runs from 0 to 2\pi, by definition. The loop parameterized by this \sigma is X: \sigma \mapsto X(\sigma). To any given point X in parameterized loop space there is a point \tilde X defined by \tilde X(\sigma) = X(2\sigma), which is the same loop as X but traversed twice. In general, \left. U\cdot V\right|_X \neq \left. U\cdot V\right|_{\tilde X}, as is manifest from that formula (2.2), which in full glory reads

(1)\left.U\cdot V\right|_X = \int_0^{2\pi} d\sigma\, g_{\mu\nu}(X(\sigma)) U^\mu(X)(\sigma) V^\nu(X)(\sigma) \,.

If, of course, the target space contraction is independent of X, then so is \left. U\cdot V\right|_X. But there is nothing wrong with that, as far as I can see.
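Maybe a toy computation makes this concrete (the following is just an illustrative discretization, not anything from my notes): take U = V = X^\prime, a vector field on loop space that genuinely depends on the point X, and compare X with the doubled loop:

```python
import numpy as np

# With U = V = X', formula (2.2) becomes int dsigma |X'(sigma)|^2 (flat metric),
# and the doubled loop Xtilde(sigma) = X(2 sigma) gives a different answer
# even though its image is the same circle.
N = 1000
sigma = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
d_sigma = 2 * np.pi / N

def inner_XpXp(X):
    dX = np.gradient(X, sigma, axis=0)   # X'(sigma)
    return np.sum(dX * dX) * d_sigma

X = np.stack([np.cos(sigma), np.sin(sigma)], axis=1)
Xtilde = np.stack([np.cos(2 * sigma), np.sin(2 * sigma)], axis=1)
print(inner_XpXp(X), inner_XpXp(Xtilde))   # ~ 2*pi versus ~ 8*pi
```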

I think that we should consider the slight possibility that mine is the one we want to use. Maybe :)

Ok. The reason why I use the particular loop space metric that I do is that this way the inner product on loop space reproduces the usual inner product on the string’s Hilbert space. So in particular this way \mathbf{d}_K + \mathbf{d}^\dagger_K really is one of the supercharges on the worldsheet. This is related to the fact that in this metric K^{(\mu,\sigma)} = X^{\prime (\mu,\sigma)} is a Killing vector.

But you are right that in principle we could consider other metrics on loop space, given some particular metric on target space. For general choices X^\prime won’t be a Killing vector anymore, though, which would be undesirable, since it is the flow of X^\prime that we want to divide out by in the end.

For these reasons I pretty strongly tend to stick to the metric that I am using. But I have to admit that I didn’t think about whether there are other metrics that might also be interesting. Maybe there are. But I expect that the metric you proposed does not have X^\prime as a Killing vector.

Posted by: Urs Schreiber on July 19, 2004 1:46 PM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

I said

It is somewhat encouraging that you think that maybe the stuff I wrote is not completely garbage :)

and your response almost seemed apologetic. No need! I am genuinely happy if you think what I wrote is not completely garbage! :) If you think it is even more than slightly interesting, that is a bonus :)

As usual, I think we have some notational issues to work through, but I think it is worth struggling through the pain so that we get on the same page.

For general choices X^\prime won’t be a Killing vector anymore, though, which would be undesirable, since it is the flow of X^\prime that we want to divide out by in the end.

I could be wrong, but I think that our inner products are related by a scale factor. If that is the case, then I would think X^\prime should still be a Killing vector.

I’m still not gone for work yet! :)

More later,
Eric

Posted by: Eric on July 19, 2004 1:59 PM | Permalink | Reply to this

Re: Calculus on Loop Space

I could be wrong, but I think that our inner products are related by a scale factor. If that is the case, then I would think X^\prime should still be a Killing vector.

That’s a straightforward computation, but right now I don’t have the leisure to do it. One would first need to compute the Levi-Civita connection of the metric that you propose and then compute the symmetrized covariant derivative of X^\prime.

But why exactly do you think the inner product that you proposed is ‘good’, or better than any other choice, anyway?

Originally it seemed that it was the coordinate-dependent formulation of ‘my’ inner product that you objected to. But all I do is specify a metric G on loop space, induced from that on target space, and then set the inner product to U\cdot V = G(U,V), as usual.

Posted by: Urs Schreiber on July 19, 2004 2:32 PM | Permalink | PGP Sig | Reply to this

Re: Calculus on Loop Space

Hi Urs,

I’m in a rush, but here is a quick note…

But why exactly do you think the inner product that you proposed is ‘good’, or better than any other choice, anyway?

The answer to this question lies in a fact that we seem to agree upon. Namely,

If, of course, the target space contraction is independent of X, then so is \left. U\cdot V\right|_X. But there is nothing wrong with that, as far as I can see.

I think that maybe we should reconsider this fact. It is somewhat troubling to me. I think that if the contraction on target space is independent of X, then \left. U\cdot V\right|_X should be proportional to the length of X in \mathcal{M}. In this way, the generalized inner product I wrote down

(1)U\cdot V = \exp(\alpha L)\, g(U,V)

reduces to

(2)U\cdot V = g(U,V)

as the string (and all subsequent higher n-loops) shrinks to zero.

This is very nice in my opinion :)

I also very much like the fact that

(3)L^n 1 \leftrightarrow \vol,

i.e. L^n 1 corresponds to the volume form on \mathcal{M}.

I am not immovable about this idea, but I am growing more fond of it by the minute.

I think the concerns you brought up will disappear once the details are worked out. Maybe not, in which case the original inner product would clearly be superior :)

Eric

Posted by: Eric on July 19, 2004 3:50 PM | Permalink | Reply to this

Inner Products

I just had a thought…

Now, it seems to me like the original inner product you are suggesting is an averaging of the inner product I am suggesting. To make this explicit, I will keep the notation

(1)U\cdot V

for my inner product and write the original one as

(2)\langle U\cdot V\rangle,

where the two are related by

(3)\left. U\cdot V\right|_X = \left.\langle U\cdot V\rangle\right|_X \int_X \vol.

As I’ve said, U\cdot V vanishes as X shrinks to a point p, but we have

(4)\lim_{X\to p} \left.\langle U\cdot V\rangle\right|_X = \left. g(U,V)\right|_p.

I can see why this would be desirable. It provides a way for a tiny loop to behave like a point. This is nice. On the other hand, I seem to be suggesting something a bit more radical. Well, which is really more radical? I am including points, loops, 2-loops, … in my inner product

(5)U\cdot V = \exp(\alpha L)\, g(U,V)

and I get my low energy limit making string stuff vanish as the string shrinks to a point. We are left with pointlike stuff because pointlike stuff is included in the inner product. In your approach, you begin with 1-loops and work your way up from there. Points never make an appearance. The stringy physics disappears as the string shrinks because of an assumed smoothness of g(U,V), and the average of g(U,V) over a loop approaches g(U,V) at a point as the loop shrinks to the point.

Ok. I see that I am butting heads with 30 years of string theory :) The question is, “What inner product is more natural?” On the one hand, we have an averaging inner product that converges to a point-like value when the p-loop shrinks to a point. On the other hand, we have a kind of “instantaneous” inner product that vanishes as the p-loops shrink to a point, but the entire inner product does not vanish when evaluated at points. In such cases, it gives precisely the expected value. I see both as being viable, and each would give rise to different physics in subtle ways.

I feel like I probably didn’t explain that very clearly, so I will most likely try again later, but I’ll let this go for now. I hope you see how your inner product is really an average. To see this, try expressing your inner product (when I say “your”, I mean “the standard inner product you are suggesting”; it’s just shorthand :)) in a coordinate free and parameter free manner. You will find that you cannot :) Your inner product is coordinate and parameter independent, as it should be (well, I hope it is!), but expressing it in a coordinate free and parameter free notation will not be easy. You’ll see what I’m talking about if you try :)

Gotta run!

Eric

Posted by: Eric on July 19, 2004 8:09 PM | Permalink | Reply to this

Re: Inner Products

Hi Eric -

in which sense does ‘your’ inner product generalize to higher degrees while ‘mine’ does not? I’d think what you can do with one of these you can do with the other.

You may be right about your intuition about ‘your’ inner product, but right now I don’t see any fully convincing argument that it is one that is interesting.

In fact, even on purely aesthetic grounds I like the standard inner product better, because here the integration over \sigma is just index contraction. In particular, we could consider polygon space instead of loop space; then \sigma would take on discrete values and would look even more like an index, just like \mu. From this perspective, just summing over \sigma seems much more natural than having a weighted sum over it, as in your proposal.

(All these are arguments over and above the fact that the standard inner product is the very obvious one in Fock space language. But, ok, I can imagine that there are in principle other interesting inner products than that standard one.)

Posted by: Urs Schreiber on July 19, 2004 10:51 PM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Hi Urs! :)

I was wondering why you were so quiet today. Now I see that you got to spend a great day talking with t’Hooft :) Very nice :)

in which sense does ‘your’ inner product generalize to higher degrees while ‘mine’ does not? I’d think what you can do with one of these you can do with the other.

Sorry. I knew I wasn’t explaining myself very well. I did not mean to imply that your inner product doesn’t generalize to higher degrees. It obviously does. Rather, what I was trying to say is that mine bundles all of the inner products together in a way that seems kind of natural to me. I didn’t see an obvious way to do the same with the standard inner product, but there might very well be. Then again, maybe bundling all the inner products isn’t even a good idea to start with.

You may be right about your intuition about ‘your’ inner product, but right now I don’t see any fully convincing argument that it is one that is interesting.

Sorry if you feel like I am spewing nonsense. I probably am :|

Here is something that might be a little interesting about my inner product.

On \mathcal{M}, we have a global inner product of forms

(1)[\alpha,\beta] = \int_{\mathcal{M}} (\alpha\cdot\beta)\, \vol.

It seems to me that we need to use my L map together with its \vol weighting in order to define a meaningful global inner product on \mathcal{L}(\mathcal{M}) such that

(2)[L\alpha,L\beta] = [\alpha,\beta].

I could be wrong, but it seems like this relation depends crucially on having

(3)L\vol

be the volume form on loop space. For this to make sense, I think you need the \vol incorporated in L. And if \vol is incorporated in L, then \vol should likewise be incorporated in the inner product.

Let me just clarify. I am not suggesting that the standard inner product is wrong in any way and we should get rid of it. I am simply exploring the possibility that maybe there is an alternative. This alternative would give slightly different physics. My inner product could very well turn out to be junk.

In fact, even on purely aesthetic grounds I like the standard inner product better, because here the integration over \sigma is just index contraction.

Yes yes. Sorry. This is a very good point. I hope I am not trying your patience.

All these are arguments over and above the fact that the standard inner product is the very obvious one in Fock space language.

I will try to find references (on my own, maybe Szabo or something) and look this up, but I’ll just remind you that I don’t even know what the standard Fock space language for strings is. I imagine I might have similar reservations there, but maybe I should keep those to myself or I might push you over the edge :)

Gotta run!

More later,
Eric

Posted by: Eric on July 20, 2004 12:56 AM | Permalink | Reply to this

Re: Inner Products

I was wondering why you were so quiet today. Now I see that you got to spend a great day talking with t’Hooft :) Very nice :)

Yes, it was in fact so nice that I am now suffering from the same sleeplessness that you had yesterday (or was it today?). ;-) It is half past three in the morning right now, and since I am still not sleeping I thought I could just as well check the SCT.

Ok, let’s see. You write:

Rather, what I was trying to say is that mine bundles all of the inner products together in a way that seems kind of natural to me. I didn’t see an obvious way to do the same with the standard inner product, but there might very well be.

Now I am confused. If in the definition of L^n which you gave here we just remove all the volume factors, don’t we get the L^n version of the ‘standard’ inner product?

Then you write:

(1)[L\alpha,L\beta] = [\alpha,\beta] \,.

Is there supposed to be an L in front of the right hand side? I suppose that’s what you mean. But - I apologize in advance for saying this, recall that I am very tired in a way ;-) - still, I don’t quite see it.

Maybe it would help if you could write this equation out in detail, perhaps I have some wrong definition in mind.

I hope I am not trying your patience.

Not at all, really. It was my fault that I didn’t comment earlier on all the ideas that you had over the weekend! :-)

I will try to find references (on my own, maybe Szabo or something) and look this up

Oh, don’t bother. This is easily explained in a line or two.

The key is that if the adjointness relations

(2)(-i \partial_{(\mu,\sigma)})^\dagger = -i \partial_{(\mu,\sigma)}

and

(3)(X^{(\mu,\sigma)})^\dagger = X^{(\mu,\sigma)}

hold, then the objects

(4)a^\mu_n := \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} \left( -i \eta^{\mu\nu}\partial_{(\nu,\sigma)} + X^{\prime (\mu,\sigma)} \right) e^{in\sigma} \, d\sigma \,,

which satisfy the generalized oscillator algebra

(5)[a_n^\mu,a_m^\nu] = n \eta^{\mu\nu}\delta_{n,-m}

(or at least they would if I got the signs right…),

which identifies a_n as an annihilation operator for n\gt 0 and as a creation operator for n \lt 0, satisfy the Fock space requirement that creators and annihilators be mutually adjoint. And indeed, this is what happens, due to the above adjointness relations of \partial and X:

(6)(a_n^\mu)^\dagger = a_{-n}^\mu \,.

That’s all there is to it, essentially.
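If it helps, the computation behind (5) is short. Schematically (I am suppressing all normalizations here, since I haven’t pinned down the signs above either), only the cross terms survive, and the derivative of the delta function is what produces the factor of n:

```latex
% schematic check of the oscillator algebra; normalizations suppressed
[a_n^\mu, a_m^\nu]
  \propto \int_0^{2\pi}\!\!\int_0^{2\pi} d\sigma\, d\kappa\;
      e^{in\sigma} e^{im\kappa}
      \Big( \big[ -i\eta^{\mu\rho}\partial_{(\rho,\sigma)},\,
                  X^{\prime(\nu,\kappa)} \big]
          + \big[ X^{\prime(\mu,\sigma)},\,
                  -i\eta^{\nu\lambda}\partial_{(\lambda,\kappa)} \big] \Big)
  \propto \eta^{\mu\nu} \int_0^{2\pi} d\sigma\; (im)\, e^{i(n+m)\sigma}
  \propto n\, \eta^{\mu\nu}\, \delta_{n,-m} \,.
```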

Posted by: Urs Schreiber on July 20, 2004 2:52 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Now I see that you got to spend a great day talking with t’Hooft

Oops! ’t Hooft :)

It took more time than I’d like to admit to get that apostrophe to come out right in html :)

Then you write:

(1)[L\alpha,L\beta]=[\alpha,\beta].

Is there supposed to be an L in front of the right hand side? I suppose that’s what you mean. But - I apologize in advance for saying this, recall that I am very tired in a way ;-) - still, I don’t quite see it.

You know, my wife is an accountant. She is so perfect with money. Not only that, she is always perfect about making sure all the lights are out when we leave the house. Once in a while, I leave a light on and seeing that light on as we pull into the driveway instills a certain amount of fear. I’m guilty! :O However, ever so rarely, she will herself leave a light on. For some reason, when this happens it fills me with a great feeling of joy :)

Granted, it is 3am for you, but your statement filled me with a similar sense of joy. Et tu Urs! :)

All I need to do is remind you that both sides are global inner products :)

The rest of what you say about oscillator algebras is very interesting, but I am too tired at the moment to comment or else I will surely leave a light on somewhere :)

Good night!

Eric

Posted by: Eric on July 20, 2004 4:34 AM | Permalink | Reply to this

Re: Inner Products

Et tu Urs! :)

All I need to do is remind you that both sides are global inner products :)

Aha!

Just leave a light on for me… ;-)

Sorry for being dense. But I am going to switch still more lights on, because apparently I am still in the dark:

When you write (L \alpha \cdot L\beta)\, L \mathrm{vol} = L[(\alpha\cdot \beta)\, \mathrm{vol}], why does that work? I think I see why (L \alpha \cdot L\beta) = L(\alpha \cdot \beta), but I don’t see why in general L(AB) = (LA)(LB). L is just the operation of integrating some object over the loop, right?

Posted by: Urs Schreiber on July 20, 2004 9:36 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Good morning! :)

A quick note before I’m off to work…

When you write (L\alpha\cdot L\beta)\, L\vol = L[(\alpha\cdot\beta)\, \vol], why does that work? I think I see why (L\alpha\cdot L\beta) = L(\alpha\cdot\beta), but I don’t see why in general L(AB) = (LA)(LB). L is just the operation of integrating some object over the loop, right?

I could be wrong, but I explained this here.

Basically, \alpha\cdot\beta is a 0-form and \vol is an n-form, and (\alpha\cdot\beta)\vol is really (\alpha\cdot\beta)\wedge\vol, and L is an algebra morphism (by definition). I defined the wedge product on \mathcal{L}(\mathcal{M}) in terms of the wedge product on \mathcal{M}. Maybe that is a bad move, though. It feels right :) Besides, I used that and the fact that L commutes with the contraction map to derive L(\alpha\cdot\beta) = (L\alpha)\cdot(L\beta), so it might actually be correct :)

Eric

Posted by: Eric on July 20, 2004 12:42 PM | Permalink | Reply to this

Re: Inner Products

It took more time than I’d like to admit to get that apostrophe to come out right in html :)

Ah, thanks for pointing out that the apostrophe should be encoded as &#8217;. I’ll change that in my entry.

Posted by: Urs Schreiber on July 20, 2004 11:27 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

One quick note…

Maybe it would help if you could write this equation out in detail, perhaps I have some wrong definition in mind.

See here.

Eric

Posted by: Eric on July 20, 2004 4:41 AM | Permalink | Reply to this

Re: Inner Products

Hello,

This might sound a little silly, but how are we defining the Dirac delta? Doesn’t it require \vol, i.e. isn’t it

(1)\int_{\mathcal{M}} \delta_p f\, \vol = f(p)

where f is a 0-form and \delta_p is the Dirac delta? I’m pretty sure it does require \vol, or else there would be some ambiguity under reparameterization, e.g. consider a case where \vol = 2 d\sigma so that

(2)\int_{\mathcal{M}} \delta_p f\, \vol = \int_{\mathcal{M}} \delta_p (2f)\left(\frac{1}{2}\vol\right) = \int_{\mathcal{M}} \delta_p (2f)\, d\sigma.

Should this be f(p) or 2f(p)?

Put another way, let’s say we have a curve \gamma:[0,2\pi]\to\mathcal{M} and a 1-form \alpha. We can write down

(3)\int_\gamma \alpha = \int_{[0,2\pi]} \gamma^*\alpha.

Now, let’s say that p is a point on \gamma with p = \gamma(1). We can also write down

(4)\int_\gamma \delta_p \alpha = \int_{[0,2\pi]} \gamma^*(\delta_p)\, \gamma^*(\alpha) = \int_{[0,2\pi]} \delta_1\, \gamma^*\alpha.

What is the result going to be?

To answer, it seems we need to express \gamma^*\alpha in terms of some basis. Let \gamma be parameterized by s, i.e. \gamma(s)\in\mathcal{M} with s\in[0,2\pi]. Let \phi,\psi: [0,2\pi]\to \mathbb{R} such that

(5)\gamma^*(\alpha) = \phi\, ds = \psi\, \vol,

where \vol is the volume form obtained by pulling back the metric g on \mathcal{M} to [0,2\pi]\subset \mathbb{R}. When you evaluate the integral

(6)\int_\gamma \delta_p \alpha = \int_{[0,2\pi]} \delta_1\, \gamma^*\alpha = \int_{[0,2\pi]} \delta_1\, \phi\, ds = \int_{[0,2\pi]} \delta_1\, \psi\, \vol,

is the result going to be \phi(1) or \psi(1)? The answer to this is important, I think. At least it is important to me if I am ever going to understand this stuff.

According to your notes, it seems you would say the integral evaluates to \phi(1). If so, then you are secretly assuming that ds is a unit 1-form, i.e. your volume form under some metric. Another possibility (I thought of just before submitting) is that you are choosing some parameter-specific normalization for the Dirac delta, which would seem weird. These conclusions are based on examining Equation (2.6) and similar ones in your paper. If the answer is \psi(1), which I think it should be, then the Dirac delta is defined with respect to the induced volume form. This seems more natural to me.
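Here is a discrete toy version of what I am worrying about (the whole setup is made up, just to make the factor visible): the same ‘delta’ gives f(p) or 2f(p) depending on which measure it is normalized against:

```python
import numpy as np

# A "discrete Dirac delta" must be normalized against some measure.  With
# vol = 2 dsigma, the delta defined against dsigma and the delta defined
# against vol give answers differing by exactly that factor of 2.
N = 1000
s = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
ds = 2 * np.pi / N
f = np.cos(s)              # a test 0-form
p = N // 8                 # sample index of the point, f(p) = cos(pi/4)

delta_ds = np.zeros(N); delta_ds[p] = 1.0 / ds     # normalized against dsigma
vol = 2.0 * ds * np.ones(N)                        # the measure vol = 2 dsigma
delta_vol = np.zeros(N); delta_vol[p] = 1.0 / vol[p]

print(np.sum(delta_ds * f) * ds)    # f(p)   -- delta matched to dsigma
print(np.sum(delta_ds * f * vol))   # 2 f(p) -- same delta, other measure
print(np.sum(delta_vol * f * vol))  # f(p)   -- delta matched to vol
```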

What do you think?

I hope I can communicate how important I think this is. I can imagine that you could look at this and brush it off as being too trivial to worry about. But I worry about these things! :) I hope you can put some thought into it too if for no other reason than to put my mind at ease :)

Eric

Posted by: Eric on July 20, 2004 9:46 PM | Permalink | Reply to this

Re: Inner Products

Hi Eric -

it seems that you want to define a metric on the interval (0,2\pi) and derive a measure from that. But this is not what I have in mind.

I just define, by fiat, a measure on (0,2\pi), namely the standard one, denoted by d\sigma, which assigns ordinary parameter interval length to each open subset of (0,2\pi).

The \delta-distributions that appear are defined with respect to this measure d\sigma.

If you want to define other measures, like \mathrm{vol} = 2 d\sigma, then of course you need to specify what a given \delta-sign is supposed to denote. But if we take all \deltas to be defined with respect to d\sigma, then \int \delta_p(f) \, d\sigma = f(p), by definition.

Now consider your example of a 1-form \alpha on target space which is of the form \alpha(x) = \delta_{\mathcal{M}}(x)\alpha_\mu dx^\mu. In writing that down, we need to specify which \delta we mean here. Let me assume it is the \delta with respect to the measure d^D x, for the same coordinates x that appear in the definition of \alpha above. In other words, we have \delta_{\mathcal{M}}(x) = \frac{1}{(2\pi)^D}\int d^D k\, e^{ik\cdot x}.

Now if we pull that back to a given loop X : (0,2\pi) \to \mathcal{M} and integrate with the measure d\sigma, we can apply the usual rules to obtain

(1)\int_0^{2\pi} X^*(\alpha) = \int_0^{2\pi} d\sigma\, \delta_{\mathcal{M}}(X(\sigma))\, \alpha_\mu X^{\prime\mu}(\sigma) \,.

Let’s assume for brevity of the discussion that X(\sigma) takes the value X(\sigma) = 0 only once and is invertible in the vicinity of that point.

Moreover, rigidly rotate the coordinate system on the target so that the coordinate line of x^1 is parallel to X^\prime at that point and all other coordinates are orthogonal.

Write \delta_{\mathcal{M}}(\vec x) = \delta_{\mathcal{M}}(x^1)\prod_{i\gt 1}\delta_{\mathcal{M}}(x^i).

From now on the factor \prod_{i\gt 1}\delta_{\mathcal{M}}(x^i) will just be a spectator.

We can now go from \delta_{\mathcal{M}} to the \delta on (0,2\pi) by writing

(2)d\sigma\, \delta_{\mathcal{M}}(X^1(\sigma)) = d\tilde \sigma\, ((X^1)^{-1})^\prime(\tilde \sigma)\, \delta_{\mathcal{M}}(\tilde \sigma) = d\tilde \sigma\, ((X^1)^{-1})^\prime(0)\, \delta_{\mathcal{M}}(\tilde \sigma)

for \sigma = (X^1)^{-1}(\tilde \sigma) in the vicinity of that point.

Now we perform the integral and find that

(3)\int_0^{2\pi} X^*(\alpha) = ((X^1)^{-1})^\prime(0)\, \alpha_\mu X^{\prime \mu}(X^{-1}(0)) \prod_{i\gt1} \delta_{\mathcal{M}}(X^i(X^{-1}(0))) \,.

That’s it in gory detail. I think that even if I messed up somewhere, the message is that the integration over (0,2\pi) is well defined and you can compute everything by taking care of how all the objects are defined.
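For what it’s worth, here is a quick numerical sanity check of that Jacobian factor, using a narrow Gaussian as a nascent delta (the setup below is a toy of mine, not anything from the paper):

```python
import numpy as np

# Pulling back delta_M(X^1) along the loop and integrating against dsigma
# picks up a factor 1/|(X^1)'| at each zero of X^1.  Here X^1 = sin(sigma)
# vanishes at sigma = 0 and sigma = pi with |(X^1)'| = 1 at both, so with
# the test factor h(sigma) = 2 + cos(sigma) we expect h(0) + h(pi) = 4.
N, eps = 100000, 1e-3
sigma = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
ds = 2 * np.pi / N

X1 = np.sin(sigma)
h = 2.0 + np.cos(sigma)
nascent = np.exp(-X1**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

print(np.sum(nascent * h) * ds)     # ~ 4.0
```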

Posted by: Urs Schreiber on July 21, 2004 11:07 AM | Permalink | PGP Sig | Reply to this

Re: Inner Products

Hi Urs,

Thank you for taking the time to explain that. At least now I can follow what you are saying :)

On the other hand, let me express a certain dissatisfaction with this approach, and you can either agree or disagree, but I doubt I will be able to convince you to change (because there might not be a compelling reason to). That is ok, as long as we understand each other :)

Here is why I do not like what you said…

Consider a smooth map

(1)\gamma: S^1\to\mathcal{M}

and two distinct diffeomorphisms

(2)\sigma,\sigma': [0,2\pi]\to S^1.

It seems that the only way we can define an unambiguous, parameterization invariant Dirac delta is if we have a volume form \vol. If you have a volume form, you can define an unambiguous, coordinate independent Dirac delta via

(3)\int_{\mathcal{M}} \delta_p f\, \vol = f(p).

Any other choice is going to be coordinate dependent.

Now let’s define

(4)\gamma^* \alpha = f\, \vol,
(5)\sigma^*(\gamma^*\alpha) = \phi\, d\sigma,

and

(6)\sigma'^*(\gamma^*\alpha) = \phi'\, d\sigma',

where f is a 0-form on S^1 and \phi,\phi' are 0-forms on [0,2\pi]. Also let

(7)p = \gamma(q)

and \sigma,\sigma' be parameterizations such that

(8)p = \gamma\circ\sigma(a) = \gamma\circ\sigma'(a').

We can now define three distinct \delta-functions

(9)(1)\quad\int_\gamma \delta_p \alpha = \int_{S^1} \delta_q f\, \vol = f(q),
(10)(2)\quad\int_\gamma \delta^\sigma_p \alpha = \int_0^{2\pi} \delta_a \phi\, d\sigma = \phi(a),
(11)(3)\quad\int_\gamma \delta^{\sigma'}_p \alpha = \int_0^{2\pi} \delta_{a'} \phi'\, d\sigma' = \phi'(a').

Unless I am making a blunder, which is always possible, we will have in general

(12)f(q) \ne \phi(a) \ne \phi'(a').

Your response suggests, and maybe I am reading too much into it, that string theorists use a coordinate dependent \delta-function in their formulations. They end up with coordinate dependent expressions and then, at the end of it all, look for things that are coordinate independent, because those are of course the only meaningful things to look at.

But why?!?!??!

This is like saying you want to work with coordinate dependent maps between manifolds in standard geometry and then at the end try to mod out by coordinate transformations to reduce to things that are coordinate independent. This sounds just silly to me, especially when there are tools in place that are coordinate independent already.

If you do not use the covariant δ\delta-function, then I can understand why you need to consider loop space to be that of parameterized loops and why you need to consider two different parameterizations of the same loop as different points in loop space. This seems to be solely due to a bad choice of δ\delta-function, which propagates through to all loop space operators. It seems to me, though I am keenly aware I could be wrong and almost expect to be wrong, that if you defined your operators on loop space using the covariant δ\delta-function, then you could work directly with unparameterized (but parameterizable) loops. In other words, of course you may need to express things in coordinates to help you with algebra, but whatever you write down will be independent of the parameterization you choose. The way you have formulated things in your notes, different parameterizations give inequivalent objects because the δ\delta function buried in the operators is coordinate dependent.

Please forgive me if I am wrong, but this seems to be absolutely fundamental to formulating differential geometry on loop space. How you define your δ\delta-function may seem like an innocent enough thing, but it has repercussions that resonate throughout the entire formulation. It makes the difference between having to deal with parameterization dependent expressions or parameterization independent expressions.

To be even more bold, while still remembering that I could be completely wrong, I claim that if you were to reformulate things with the covariant δ\delta-function, then d Kd_K would ALWAYS be nilpotent because K\mathcal{L}_K would be zero by construction. As it stands, you end up with K\mathcal{L}_K not always being zero because your expressions are parameterization dependent. I can’t help but think things would be simpler if you had d K 2=0d_K^2 = 0 on the nose for all fields on loop space.
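
(To spell out the step behind this claim, assuming the usual convention that d K=d+ι Kd_K = d + \iota_K is the KK-deformed exterior derivative: since d 2=0d^2 = 0 and ι K 2=0\iota_K^2 = 0, Cartan’s formula gives

d_K^2 = (d + \iota_K)^2 = d\iota_K + \iota_K d = \mathcal{L}_K \,,

so d Kd_K squares to zero precisely on fields annihilated by K\mathcal{L}_K.)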

Gotta run to work, but I hope I am getting my point across.

Best wishes,
Eric

Posted by: Eric on July 21, 2004 1:22 PM | Permalink | Reply to this

The need for parameterized loops

Yes, I fully agree with what you say. Indeed, the string/loop is parameterized and the same loop with different parameterization is taken to be a different object.

Maybe it would be more suggestive to use the terminology map space or more precisely space of periodic maps on (0,2π)(0,2\pi) instead of parameterized loop space.

Now why don’t we just quotient out by reparametrizations and go to the true unparameterized loop space, on which indeed K=0\mathcal{L}_K = 0?

The reason is that the general state of the quantum string is a wave function on parameterized loop space which does not necessarily take the same value on any two maps which only differ by a reparameterization.

Most prominently, the ground state of the string is not reparameterization invariant.

Why that?

Recall the long discussion about the LQG string. There such a complete rep invariance was indeed built in by hand, and in such a context one could certainly do what you have in mind, namely take the config space of the string to be the space of unparameterized loops, which automatically ‘solves’ all the reparameterization constraints, and then on that space only impose the remaining Hamiltonian constraint.

In fact, this is precisely what is done in LQG: The reparameterization constraints are ‘solved’ by explicitly constructing states that do not depend on the parameterization. (Recall that we can think of the string as 1+1 dimensional gravity coupled to scalar fields on the worldsheet.)

So what you have in mind here is an LQG-like quantization of the string.

Now, as we discussed at great length before, there is a reason why this is a dubious procedure, even though it looks so natural that apparently nobody noticed the subtlety until we began to discuss the LQG-string paper.

The subtlety is that, while it is true classically that everything is perfectly reparameterization invariant, the quantization of the constraints tells a different story. There one finds that, due to the anomaly, not all of the constraints as they follow from canonical quantization can be imposed at once! Only half of them can.

Two remarks on that:

1) It is important that I am talking about canonical quantization. The LQG people are thinking of a relaxation of the ordinary recipe of canonical quantization, and only this relaxation allows one to have fully rep invariant ‘quantum’ states.

2) The fact that a state is not rep invariant may sound strange, but when one remembers that what counts in quantum theory is not so much a given state but rather the expectation values computed from a state, it is plausible that we should only demand that the expectation value of the reparameterization constraint operators vanish. And this is still true if a single state is only annihilated by half of the constraints. Similarly, all amplitudes computed on the worldsheet are independent of the parameterization. And this is what counts.

So at this point it may sound counterintuitive that the parameterization still plays a role, in a sense, in the quantum theory. But this is demanded by the ordinary quantum formalism - and it all works out consistently.

In the end it can be best understood in path integral language. In order to evaluate the string’s path integral we have to compute the string’s action for several configurations. This action is diff invariant, but for purely practical reasons it is necessary to introduce coordinates merely to compute it. That’s how they enter. They don’t affect expectation values and amplitudes in the end, but they may well show up in other formal objects that don’t directly describe something physically observable.

On the other hand, I have noted in my recent paper that loop space formalism is particularly natural if we are working with boundary states. These are states of the closed string which are indeed annihilated by all the reparameterization constraints and so these truly live on unparameterized loop space. (It follows that they are not annihilated by the Hamiltonian constraint of the string. These boundary states are ‘off-shell’ in a sense. That’s not too surprising since they represent background configurations different from empty flat spacetime, and so they are not on-shell with respect to the constraints of empty flat spacetime.)

So as far as boundary states are concerned we could in principle indeed do as you propose and go to the space which is obtained from parameterized loop space by identifying all loop-maps which only differ by a parameterization. On this space d Kd_K is indeed nilpotent.

I am currently working on a paper where I will make some use of this perspective, and I will discuss how one can construct ‘boundary DDF invariants’ and ‘boundary Pohlmeyer invariants’, which are objects that are well defined also on the unparameterized loop space, because they take rep invariant states to rep-invariant states.

As soon as I have some rudiments finished I’ll put a pdf about that on the web.

Posted by: Urs Schreiber on July 21, 2004 5:34 PM | Permalink | PGP Sig | Reply to this

Re: The need for parameterized loops

Hi Urs,

Thanks again for your help.

I know you are busy, but I was hoping that maybe you can give me a quick homework assignment? If I am able to complete the assignment, then maybe I might have something interesting to say.

From your last response, I hear you saying that the individual states are parameterization dependent, but that any expectation values you compute are independent of parameterization so it is ok. It seems like the “finish line” is an expectation value. The “starting line” is an assumption that strings are somehow important. So we have a path

(1)"strings""expectation values". \text{"strings"} \quad\longrightarrow\quad \text{"expectation values"}.

Could you give me an example of an expectation value computation involving strings that is simple enough that I might be able to follow, but nontrivial enough that I can see how the basic mechanics of computing expectation values in general works? A pointer to a worked out example would do if you are pressed for time.

My homework assignment will be to reproduce this calculation using an entirely parameterization independent formalism from start to finish.

A string is independent of parameterization (or I believe it should be) and the expectation value is parameterization independent. In carrying out the computation, I see that it might help to introduce parameters, but I claim that parameters can be introduced in a way such that each step along the way is independent of the parameters chosen.

I don’t question that following the standard approach can get you from start to finish. I just question the efficiency of the path in getting you there. My experience as an engineer has taught me that often the most beautiful solution is also the most efficient one, i.e. beauty and efficiency go hand in hand. Therefore, I care about efficiency almost as much as I care about beauty :)

Besides, I’ve always tried to work with a guiding principle:

You can’t complain about something unless you offer up something better as an alternative.

I want to stop complaining about the way the standard approach works and attempt to offer something better instead. I fully expect to arrive at the same answer, but I hope to demonstrate that there is a better way to do things.

Eric

Posted by: Eric on July 21, 2004 7:39 PM | Permalink | Reply to this

Re: The need for parameterized loops

The simplest thing is to consider the following:

The generator of arbitrary reparameterization is K(σ)=X μ(σ)δδX μ(σ)+fermionicterms\mathcal{L}_K(\sigma) = X^{\prime \mu}(\sigma)\frac{\delta}{\delta X^\mu(\sigma)} + \text{fermionic terms}, where we can ignore the fermionic terms (form creators/annihilators) for the moment. (You can find the full expression in equation (3.5) of hep-th/0401175). It is very convenient to take linear combinations for different σ\sigma by making a Fourier transformation:

(1) K,n:= 0 2πdσe inσX μ(σ) (μ,σ)+fermionicterms. \mathcal{L}_{K,n} := \int_0^{2\pi} d\sigma\, e^{in\sigma} X^{\prime \mu}(\sigma) \partial_{(\mu,\sigma)} + \text{fermionic terms} \,.

As we have discussed before, taking adjoints gives us (X μ(σ) (μ,σ)) =X μ(σ) (μ,σ)(X^{\prime \mu}(\sigma)\partial_{(\mu,\sigma)})^\dagger = - X^{\prime \mu}(\sigma)\partial_{(\mu,\sigma)}. This implies directly that

(2)( K,n) = K,n. (\mathcal{L}_{K,n})^\dagger = \mathcal{L}_{K,-n} \,.

A state |ψ|\psi\rangle (a function on parameterized loop space) is reparameterization invariant if it is annihilated by all the K,n\mathcal{L}_{K,n}. But say we want to impose at most

(3) K,n|ψ=0n0 \mathcal{L}_{K,n}|\psi\rangle = 0 \,\,\,\, \forall\, n \geq 0

for all non-negative integers nn instead of for all integers nn. Then, still, the expectation value of all K,n\mathcal{L}_{K,n} vanishes, because

(4)ψ| K,nψ= K,nψ|ψ. \langle\psi| \mathcal{L}_{K,-n} \psi \rangle = \langle\mathcal{L}_{K,n} \psi |\psi \rangle \,.
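
A minimal finite-dimensional analogue of this mechanism (my own hedged illustration, with a generic matrix standing in for the constraint operators): a state annihilated by an operator AA but not by A A^\dagger still has vanishing expectation values for both.

```python
import numpy as np

# <psi| A^dagger psi> = <A psi|psi> = 0 whenever A psi = 0, even if
# A^dagger psi != 0 -- the mechanism behind imposing only half of the L_{K,n}.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A[:, 0] = 0.0                       # force A e_0 = 0, so e_0 plays the role of psi

psi = np.zeros(5, dtype=complex)
psi[0] = 1.0

print(np.linalg.norm(A @ psi))              # 0.0 : psi is annihilated by A
print(np.linalg.norm(A.conj().T @ psi))     # nonzero : but not by A^dagger
print(np.vdot(psi, A @ psi))                # <A> = 0
print(np.vdot(psi, A.conj().T @ psi))       # <A^dagger> = conj(<A psi|psi>) = 0 as well
```
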
Posted by: Urs Schreiber on July 21, 2004 8:02 PM | Permalink | Reply to this

Covariant Formulation of Loop Space

On a manifold MM with metric gg, let KK be a vector field,

(1)γ:S 1 \gamma:S^1\to\mathcal{M}

a loop, and

(2)σ:[0,2π]S 1 \sigma: [0,2\pi]\to S^1

a parameterization of γ\gamma such that

(3)K| γσ(s)=Tddσ| γσ(s) \left. K\right|_{\gamma\circ\sigma(s)} = T \left. \frac{d}{d\sigma} \right|_{\gamma\circ\sigma(s)}

for all s[0,2π]s\in[0,2\pi], where TT is a constant. The flow

(4)ϕ s: \phi_s:\mathcal{M}\to\mathcal{M}

generated by KK will carry a point around the loop. Let

(5)p=γσ(0) p = \gamma\circ\sigma(0)

and define

(6)p(s)=ϕ s(p). p(s) = \phi_s(p).

For any 1-form α\alpha it is clear that we always have

(7) γα= (ϕ s) *γα. \int_\gamma \alpha = \int_{(\phi_s)_* \gamma} \alpha.

Consequently, we have

(8) γ Kα=dds[ (ϕ s) *γα]| s=0=0 \int_\gamma \mathcal{L}_K \alpha = \frac{d}{ds} \left. [\int_{(\phi_s)_*\gamma} \alpha] \right|_{s = 0} = 0

for any loop γ\gamma on \mathcal{M}. Therefore, we can take

(9) Kα=0 \mathcal{L}_K \alpha = 0

as a statement that the integral of α\alpha around the loop γ\gamma is independent of the parameterization of the loop. On \mathcal{M}, this is obvious. However, we would like to define an unparameterized loop space for which the above expression is also manifest.

We will consider loop space to be simply the set of unparameterized (but parameterizable) maps

(10)γ:S 1. \gamma:S^1\to\mathcal{M}.

Given a pp-form α\alpha on \mathcal{M}, we obtain a pp-form on loop space via the loop map

(11)L:Ω p()Ω p(()) L:\Omega^p(\mathcal{M})\to\Omega^p(\mathcal{L}(\mathcal{M}))

defined by

(12)Lα= S 1γ *(α)vol, L\alpha = \int_{S^1} \gamma^*(\alpha) \vol,

where vol\vol is the induced volume form on S 1S^1 obtained by pulling back the metric gg from \mathcal{M} to S 1S^1 and γ\gamma is a, yet to be specified, loop. Note that, because neither coordinates nor parameterizations were used to define the loop map, it is trivially independent of coordinates and parameterizations.

Proposition: The loop map is 1-1.

Let γ p\gamma_p be a loop that is contractible to a point pp\in\mathcal{M}, then

(13)α| p=limγ ppLα| γ p|γ p|=limγ pp S 1γ p *(α)vol|γ p|, \left. \alpha\right|_p = \underset{\gamma_p\to p}{\lim} \frac{\left. L\alpha\right|_{\gamma_p}}{|\gamma_p|} = \underset{\gamma_p\to p}{\lim} \frac{\int_{S^1} \gamma_p^*(\alpha) \vol}{|\gamma_p|},

where

(14)|γ p|= S 1vol |\gamma_p| = \int_{S^1} \vol

is the length of the loop.
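
As a sanity check of this definition, here is a minimal numerical sketch (my own toy example: an ellipse in R 2R^2 and a made-up 0-form) showing that integrating against the induced volume form gives the same number in two different parameterizations:

```python
import numpy as np

# Loop map on a 0-form: L f = int_{S^1} f vol, with vol induced by the
# target-space metric. The answer should not depend on the parameterization.
gamma = lambda s: np.stack([2.0 * np.cos(s), np.sin(s)])   # an ellipse in R^2
f = lambda x, y: x**2 + y                                  # a 0-form on target space

def L_f(reparam, n=200_001):
    t = np.linspace(0.0, 2.0 * np.pi, n)
    xy = gamma(reparam(t))                                 # shape (2, n)
    speed = np.sqrt((np.gradient(xy, t, axis=1) ** 2).sum(axis=0))
    return np.trapz(f(*xy) * speed, t)                     # f integrated against vol

print(L_f(lambda t: t))                          # identity parameterization
print(L_f(lambda t: t + 0.1 * np.sin(3 * t)))    # a warped one -- same value
```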

We can define a tensor product on ()\mathcal{L}(\mathcal{M}) in terms of the tensor product on \mathcal{M} via

(15)LαLβ=L(αβ). L\alpha\otimes L\beta = L(\alpha\otimes\beta).

We can also define a contraction map on ()\mathcal{L}(\mathcal{M}) such that

(16)Lcontract=contractL. L\circ\contract = \contract\circ L.

Given a vector field XX on \mathcal{M}, we can use the above relations to define a vector field LXLX on ()\mathcal{L}(\mathcal{M}) via

(17)Lα,LX=Lα,X. \langle L\alpha, LX\rangle = L\langle \alpha, X\rangle.

Warning: I am running out of steam and it’s getting late. :)

Before I sign off, I’ll just state that everything I’ve said is equally valid for open curves as well as loops. I was also trying to build up to

(18)[Lα,Lβ]=[α,β]. [L\alpha,L\beta] = [\alpha,\beta].

Once I have this, I can start looking at expectation values

(19)F=[Lα,FLα][Lα,Lα]. \langle F\rangle = \frac{[L\alpha,FL\alpha]}{[L\alpha,L\alpha]}.

If I can get this far, I was going to show that

(20) LXLα=L( Xα) \mathcal{L}_{LX} L\alpha = L(\mathcal{L}_X \alpha)

so that

(21) LK Lα=[Lα, LKLα][Lα,Lα]=[Lα,L( Kα)][Lα,Lα]=[α, Kα][α,α]= K α=0 \langle \mathcal{L}_{LK} \rangle_{L\alpha} = \frac{[L\alpha,\mathcal{L}_{LK} L\alpha]}{[L\alpha,L\alpha]} = \frac{[L\alpha,L(\mathcal{L}_K \alpha)]}{[L\alpha,L\alpha]} = \frac{[\alpha,\mathcal{L}_K\alpha]}{[\alpha,\alpha]} = \langle \mathcal{L}_K \rangle_{\alpha} = 0

because

(22) Kα=0. \mathcal{L}_K\alpha = 0.

This would almost seem like a solution to my homework problem.

Good night! :)

Eric

Posted by: Eric on July 22, 2004 4:51 AM | Permalink | Reply to this

Re: Covariant Formulation of Loop Space

Hi Eric -

I have a couple of comments and questions:

You write:

On a manifold MM with metric gg , let KK be a vector field,

Do you really want KK to be defined on target space? Then you have to use different KK for different loops in order that KK restricted to a given loop really gives σ\partial_\sigma along that loop.

BTW, here is an important point which I may not have emphasized enough but which I mentioned in my last comment:

For something on the loop to be reparameterization invariant it is not sufficient for it to be annihilated by K\mathcal{L}_K. That’s because KK alone only generates rigid reparameterizations, i.e. those which send σσ+const\sigma \mapsto \sigma + {const}. In general we need σf(σ)\sigma \mapsto f(\sigma) and this is the reason for the use of the ‘modes’ of KK which I mentioned last time.

Next, I am not sure why you restrict to things like Kα=0\mathcal{L}_K \alpha = 0, where α\alpha is a form on target space. More generally we have objects on loop space that don’t directly come from target space.

Then, I have to apologize and to admit that I don’t fully understand the definition of LL. Usually when you pull back via γ *(α)\gamma^*(\alpha) a pp form α\alpha to a 1d manifold like a loop, the result vanishes for p>1p\gt 1. This seems to be something else than you have in mind.

Could you write out explicitly the action of LL that you have for example for a 2-form?

Posted by: Urs Schreiber on July 22, 2004 10:38 AM | Permalink | PGP Sig | Reply to this

Re: Covariant Formulation of Loop Space

Good morning :)

Do you really want KK to be defined on target space? Then you have to use different KK for different loops in order that KK restricted to a given loop really gives σ\partial_\sigma along that loop.

I think this is ok. Besides, for a single KK on \mathcal{M} we cover LOTS of loops. Granted, I understand why you wouldn’t like this.

In general we need σf(σ)\sigma\mapsto f(\sigma) and this is the reason for the use of the ‘modes’ of KK which I mentioned last time.

Ok. I’ll think about this, but I think what I have in mind also works for arbitrary f(σ)f(\sigma) without having to resort to modes.

Next, I am not sure why you restrict to things like Kα=0\mathcal{L}_K\alpha = 0, where α\alpha is a form on target space. More generally we have objects on loop space that don’t directly come from target space.

Are you convinced that we need forms on loop space that are not obtained from forms on target space? If so, what is it that has convinced you?

Physics is about explaining things that can be measured. Anything that can be measured must be expressible as a differential form on target space. If it cannot be expressed as a form on target space, it is not measurable. Hence, it is not physics unless it can be mapped to something on target space.

If I am wrong about this, please tell me because this is a cornerstone of everything I understand about physics :) It would be shattering for me to learn that even this is wrong :)

Then, I have to apologize and to admit that I don’t fully understand the definition of LL.

Why are you apologizing for my inability to write down something that makes sense? :)

You are absolutely correct. What I wrote is really only valid for 0-forms. I was being “cavalier” assuming that some such LL existed for higher degree forms. I am still fairly sure that one must exist, but I’ll have to put in a little more effort to give it a rigorous definition.

Gotta run!

Eric

Posted by: Eric on July 22, 2004 2:20 PM | Permalink | Reply to this

Lost in Space

Hi Urs,

Believe it or not, I’m not trying to cause trouble :) I’m just trying to understand this stuff. When I try as hard as I have to understand your deformation paper, I sometimes give up and try to reformulate things myself. Sometimes this works, but I don’t seem to be getting anywhere here.

I’m thinking about trying a simplified version of loop space, i.e. sphere space on R nR^n. In this model, a point in R n+1R^{n+1} corresponds to an oriented (n1)(n-1)-sphere in R nR^n. For some notation, let

(1)±S a(x 1,...,x n) \pm S_a(x^1,...,x^n)

denote an oriented (n1)(n-1)-sphere on R nR^n centered at (x 1,...,x n)R n(x^1,...,x^n)\in R^n having radius aa, where the ±\pm denotes opposite orientations.

Although it might not technically be a “map”, let me call 𝒮 *\mathcal{S}_* a map anyway from chains on R nR^n to chains on R n1R^{n-1} defined by

(2)𝒮 *(x 1,...,x n)=sign(x n)S |x n|(x 1,...,x n1). \mathcal{S}_*(x^1,...,x^n) = \sign(x^n) S_{|x^n|}(x^1,...,x^{n-1}).

In particular, we can consider “circle space on R 2R^2”, i.e. every point (x,y,z)R 3(x,y,z)\in R^3 corresponds to an oriented circle

(3)𝒮(x,y,z)=sign(z)S |z|(x,y) \mathcal{S}(x,y,z) = \sign(z) S_{|z|}(x,y)

in R 2R^2. This toy model seems to capture some of the essence of what is going on in loop space, but because oriented circles in the plane can be completely described by three real numbers, circle space is only three dimensional. My poor brain might actually be able to comprehend that.

What do you think? Could this model be helpful?

For the time being, I will assume it is a valid toy model of loop space and develop some ideas.

Given a 1-form α\alpha on R 2R^2 and a point pR 3p\in R^3, we can define a 0-form 𝒮 *(α)\mathcal{S}^*(\alpha) on R^3 via

(4) p𝒮 *(α)= 𝒮 *(p)α. \int_p \mathcal{S}^*(\alpha) = \int_{\mathcal{S}_*(p)} \alpha.

This is a generalized notion of push forward and pull back.
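
Here is a minimal numerical sketch of this pullback (my own toy example: the 1-form α=ydx+xdy\alpha = -y\,dx + x\,dy, whose circle integral is twice the enclosed area, so the answer can be checked by hand):

```python
import numpy as np

# Evaluate the 0-form S^*(alpha) at a point p = (x0, y0, z) of "circle space"
# by integrating the 1-form alpha over the oriented circle sign(z) S_{|z|}(x0, y0).
def pullback_at_point(alpha, p, n=100_001):
    x0, y0, z = p
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = x0 + abs(z) * np.cos(t)
    y = y0 + abs(z) * np.sin(t)
    dx = -abs(z) * np.sin(t)                 # dx/dt along the circle
    dy = abs(z) * np.cos(t)
    ax, ay = alpha(x, y)
    return np.sign(z) * np.trapz(ax * dx + ay * dy, t)

alpha = lambda x, y: (-y, x)                 # alpha = -y dx + x dy, d(alpha) = 2 dx^dy
print(pullback_at_point(alpha, (1.0, 2.0, 0.5)))    # 2 * pi * 0.5^2 ~ 1.5708
print(pullback_at_point(alpha, (1.0, 2.0, -0.5)))   # opposite orientation ~ -1.5708
```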

Things get a little interesting if we look at an oriented curve γ\gamma in R 3R^3. Fortunately, my previous discussion of the extrusion map H X(t)H_X(t) helps here. An oriented curve γ\gamma in R 3R^3 corresponds to extruding a circle around on R 2R^2 with orientation defined as I defined it for the extrusion map. Therefore,

(5)𝒮 *(γ) \mathcal{S}_*(\gamma)

is a valid 2-chain on R 2R^2. Now, given a 1-chain γ\gamma on R 3R^3 and a 2-form β\beta on R 2R^2, we obtain a 1-form 𝒮 *(β)\mathcal{S}^*(\beta) on R 3R^3 via

(6) γ𝒮 *(β)= 𝒮 *(γ)β. \int_{\gamma} \mathcal{S}^*(\beta) = \int_{\mathcal{S}_*(\gamma)} \beta.

After doodling some special cases, I’ve almost convinced myself that 𝒮 *\mathcal{S}_* is natural in the sense that

(7)𝒮 *=𝒮 *, \partial\circ\mathcal{S}_* = \mathcal{S}_*\circ\partial,

which, if true, implies

(8)d𝒮 *=𝒮 *d. d\circ\mathcal{S}^* = \mathcal{S}^*\circ d.

I think there is a decent chance that I haven’t made any serious blunders here, but before proceeding I’ll wait for your blessing. Do you think this is a valid toy model for loop space? If so, then I might be on the way to gaining some understanding.

The first lesson: we need to generalize our notions of push forward and pull back.

But this seems not too difficult.

Anyway…

Good night! :)

Eric

Posted by: Eric on July 23, 2004 4:12 AM | Permalink | Reply to this

Re: Lost in Space

Hi Eric -

yes, I think this is a sensible toy model and that your constructions do make sense.

(Maybe an even better toy model would be some ‘nn-gon space’, i.e. the space of nn-tuples of points in target space. When taking nn \to \infty with a suitable continuity condition this flows to loop space.)

Concerning that ‘naturalness’ condition. For loop space this is the content of the little theorem that I recall in equation (3.8) of hep-th/0407122. In general I think that what you are thinking about goes in the same direction as the stuff mentioned in that section 3.1.

Posted by: Urs Schreiber on July 23, 2004 12:01 PM | Permalink | PGP Sig | Reply to this

Re: Lost in Space

Hi Urs,

I was just looking at hep-th/0407122 and noticed something that I had seen before, but never mentioned it.

I don’t know if it is deep or a coincidence, but Equation (2.6) is essentially the Ito formula from stochastic calculus. At least if you squint your eyes enough :)

Eric

Posted by: Eric on July 23, 2004 8:24 PM | Permalink | Reply to this

Re: Lost in Space

To me it rather looks like a generalized flatness condition on some curvature. When ignoring the mode indices for a moment we are just dealing with a structure very similar to what we talked about in the context of string field theory:

There is some odd graded operator Q=dQ = d (use whatever symbol you like) which squares to something. Now we want to deform it by adding some AA so that it still squares to that same thing, i.e. (d+A) 2=d 2(d+A)^2 = d^2. The condition on AA is hence dA+AA=0dA + AA = 0. If dd were just the ordinary exterior derivative this would simply say that AA must be a flat connection.

The question that I hint at by saying ‘One large class of solution of this equation’ is whether one could find AA which don’t come from a ‘gauge’ transformation A=U (dU)A = U^\dagger (d U) of the trivial connection. I.e. are there large transformations not continuously connected to the identity.

Probably there are, and deforming the superconformal algebra by them might yield something interesting. But so far I haven’t come across any. On the other hand, I haven’t checked if there is an AA which gives the transformation (4.33) in hep-th/0401175.

Posted by: Urs Schreiber on July 24, 2004 12:25 AM | Permalink | PGP Sig | Reply to this

n-Gon Space

Hi Urs,

I was taking your advice and looking at nn-gon space. To make things as simple as possible, I started with something that is not really an nn-gon, but from which I thought I could still learn something. I started with “segment space,” i.e. the space of all straight line segments in R nR^n. I started turning the crank on things like push forward and pull back when I found (what is probably obvious) that in order to have these natural relations

(1)S *=S *, S_*\circ\partial = \partial\circ S_*,

for a point pp in segment space, we must always have (S *p)=0\partial (S_* p) = 0, i.e. the image of a point in segment space needs to be closed in target space, because we will always have S *(p)=0S_*(\partial p) = 0. Since a straight line segment is not closed, things don’t really have a chance to work out as far as I can see.

The point is, whatever toy model we come up with, a point in the “higher” space must map to a closed object in target space. A downside of this is that you cannot pull back a coordinate basis on target space to a form on nn-gon space because all closed forms pull back to zero.

My next toy model might be “triangle space” although it is tempting to look at “point space” :)

A thought just before submitting…

Wait a minute! Isn’t it true that

(2)S *(αβ)=(S *α)(S *β) S^*(\alpha\wedge\beta) = (S^*\alpha)\wedge(S^*\beta)

and

(3)S *(α+β)=(S *α)+(S *β) S^*(\alpha + \beta) = (S^*\alpha) + (S^*\beta)

for forms α,β\alpha,\beta on target space?

Since any form can be expressed as

(4)α= iϕ idβ i, \alpha = \sum_i \phi_i d\beta_i,

where ϕ i\phi_i are 0-forms, then if this is true about pull back we have

(5)S *α= iS *(ϕ i)S *(dβ i)=0 S^*\alpha = \sum_i S^*(\phi_i) S^*(d\beta_i) = 0

because

(6)S *(dβ i)=0. S^*(d\beta_i) = 0.

This would mean that there are no forms that pull back from target space to a non-zero form on the higher space. The only way out is if I am mistaken about the pull back map (which is possible, but I doubt it) or the operations are not natural.

Does this mean I am back to the drawing board? :|

Eric

Posted by: Eric on July 24, 2004 2:29 PM | Permalink | Reply to this

Re: Lost in Space

Hi Urs,

I think my latest train of thought is going nowhere (surprise). I think I should probably return to your previous question

Usually when you pull back via γ *(α)\gamma^* (\alpha) a pp-form α\alpha to a 1d manifold like a loop, the result vanishes for pp > 1. This seems to be something else than you have in mind.

Could you write out explicitly the action of LL that you have for example for a 2-form?

Let’s assume we will be working with nn-loop space n()\mathcal{L}^n(\mathcal{M}), so consider the nn-torus

(1)T n=S 1××S 1n times T^n = \underset{n\text{ times}}{\underbrace{S^1\times\cdots\times S^1}}

and a map

(2)γ:T p \gamma: T^p\to\mathcal{M}

so that we have

(3) γα= S 1××S 1γ *α \int_\gamma \alpha = \int_{S^1\times\cdots\times S^1} \gamma^*\alpha

for some pp-form α\alpha. For the time being, consider the case where p=2p = 2.

What I am looking for is related to Fubini’s theorem. I am looking for some map γ \gamma^\bullet satisfying

(4) S 1×S 1γ *α= S 1 S 1γ α. \int_{S^1\times S^1} \gamma^*\alpha = \int_{S^1} \int_{S^1} \gamma^\bullet \alpha.

In other words, γ α\gamma^\bullet\alpha is something that when integrated over S 1S^1 results in a 1-form. When the resulting 1-form is integrated over S 1S^1, the result is the same as if you integrated the 2-form γ *α\gamma^*\alpha over S 1×S 1S^1\times S^1. Does such a thing exist? It doesn’t seem to be that crazy of a thing, so maybe it can be made to make sense. Any ideas?

Eric

Posted by: Eric on July 24, 2004 5:38 PM | Permalink | Reply to this

Re: Lost in Space

Hi Eric -

Does such a thing exist?

Yes it does. Now we are converging! :-) This is what is used in that section 3.1 of hep-th/0407122.

Pick a 2-form B=12B μνdx μdx νB = \frac{1}{2}B_{\mu\nu}dx^\mu \wedge dx^\nu on target space. ‘Pull it back on 1 index’ to the loop X=X(σ)X = X(\sigma) to obtain dσ μ(σ)B μνX ν(σ)\int d\sigma\,\mathcal{E}^{\dagger \mu}(\sigma)B_{\mu\nu}X^{\prime\nu}(\sigma)

(as Peter Woit has indicated here this can be expressed in more mathematical terms, but I won’t do that here) to obtain a 1-form on loop space, which, when integrated over a loop on loop space (parameterized by τ\tau) gives the integral of the pullback of BB over the surface swept out by the loop-loop.

My apologies if my notation and wording is bad. You could alternatively have a look at equations (1.4)/(3.20) in Ferreira et al.’s hep-th/9710147. Even though they use essentially the same notation, maybe it helps to see it discussed in somewhat different words. (Note that for the moment you can just think of all the WWs appearing in that paper as being the unit element and just ignore them.)
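
For what it’s worth, here is a minimal discretized sketch of this ‘pull back on one index’ operation (my own toy setup: a constant BB-field and a made-up loop in R 3R^3, with the form-creation operator ℰ†\mathcal{E}^\dagger replaced by an ordinary array index):

```python
import numpy as np

# A 2-form B on R^3 yields a loop-space 1-form with components
#   omega_{(mu,sigma)}(X) = B_{mu nu}(X(sigma)) X'^nu(sigma),
# here for a constant B with B_{xy} = -B_{yx} = b.
n = 100_001
sigma = np.linspace(0.0, 2.0 * np.pi, n)
X = np.stack([np.cos(sigma), np.sin(sigma), 0.2 * np.sin(2.0 * sigma)])  # a loop in R^3
Xp = np.gradient(X, sigma, axis=1)

b = 1.0
B = np.zeros((3, 3))
B[0, 1], B[1, 0] = b, -b

omega = np.einsum('mn,ns->ms', B, Xp)     # omega_{(mu,sigma)} = B_{mu nu} X'^nu(sigma)

# Contract with a deformation vector field V^{(mu,sigma)}: an x-shift
# modulated along the loop.
V = np.zeros_like(X)
V[0] = np.cos(sigma)
pairing = np.trapz(np.einsum('ms,ms->s', V, omega), sigma)
print(pairing)    # = b * int cos(sigma) * (d/dsigma sin(sigma)) dsigma = pi * b
```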

Posted by: Urs Schreiber on July 25, 2004 3:55 PM | Permalink | PGP Sig | Reply to this

Still Trying

Hi Urs,

Besides obvious probable causes, e.g. too few brain cells, I don’t know why I am struggling so hard to understand differential geometry on loop space.

In my first attempts at interpreting your deformation paper, I introduced this map LL. At the time, aside from a volume form, I didn’t think I was doing anything new, but basically rewriting what you did in a coordinate-free manner.

In your paper, you have the expression

(1)UV| X= 0 2πdσg(X(σ))(U(σ),V(σ)). \left. U\cdot V\right|_X = \int_0^{2\pi} d\sigma g(X(\sigma))(U(\sigma),V(\sigma)).

Just for clarity, could we write this as

(2)UV| X= 0 2πdσg(X(σ))(U(X(σ)),V(X(σ)))? \left. U\cdot V\right|_X = \int_0^{2\pi} d\sigma g(X(\sigma))(U(X(\sigma)),V(X(\sigma)))?

If so, could you write this as

(3)UV| X= 0 2πdσg(U,V)| X(σ)? \left. U\cdot V\right|_X = \int_0^{2\pi} d\sigma \left. g(U,V)\right|_{X(\sigma)}?

This was/is my understanding, so please correct me if I am wrong.

If I am not in trouble already, here is where I probably get into trouble. I then tried to reverse engineer this expression in order to write the tangent vector

(4)U| X= 0 2πdσU μ(X(σ)) μ(X(σ))= 0 2πdσU| X(σ). \left. U\right|_X = \int_0^{2\pi} d\sigma U^\mu(X(\sigma)) \partial_\mu(X(\sigma)) = \int_0^{2\pi} d\sigma \left. U\right|_{X(\sigma)}.

Is that OK? I assumed it was and proceeded in my previous posts. If this is not correct, could you please correct me? I thought this was nice because it allows us to use the clever multi-index notation to write

(5)U=U (μ,σ) (μ,σ)= 0 2πdσU μ(X(σ)) μ(X(σ))= 0 2πU| X(σ). U = U^{(\mu,\sigma)} \partial_{(\mu,\sigma)} = \int_0^{2\pi} d\sigma U^\mu(X(\sigma)) \partial_\mu(X(\sigma)) = \int_0^{2\pi} \left. U\right|_{X(\sigma)}.

I didn’t think it felt right to denote the UU on the LHS with the same letter as the UU on the RHS, so I’ll denote the LHS with

(6)(L σU)| X=U (μ,σ) (μ,σ)= 0 2πU| X(σ). \left. (L_\sigma U)\right|_X = U^{(\mu,\sigma)} \partial_{(\mu,\sigma)} = \int_0^{2\pi} \left. U\right|_{X(\sigma)}.

I have a lot more to say, but I’ll wait to see if I’m already headed in the wrong direction before proceeding.

Cheers!

Eric

Posted by: Eric on July 25, 2004 3:41 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

seems like once again notation kept us from communicating properly. ;-)

I can only agree with what you wrote here, if the right hand sides of the last three equations are defined by the left hand sides.

This was probably part of the problem, because naively these right hand sides seem to mean something else - at least to me! :-)

If UU is a vector field on target space then I would expect U| X(σ)U|_{X(\sigma)} to be the vector of that field at the point X(σ)X(\sigma). Then 0 2πdσU| X(σ)\int_0^{2\pi}d\sigma\, U|_{X(\sigma)} would be a formal continuous sum of vectors in target space.

But this is not what the left hand side is. The left hand side is really one single vector field, but on loop space.

But now I finally understand what you mean by that map LL. Sorry for being so slow. I guess you want to have

(1)L(U μ μ)= 0 2πdσU μ(X(σ))δδX μ(σ) L(U^\mu \partial_\mu) = \int_0^{2\pi} d\sigma\, U^\mu(X(\sigma)) \frac{\delta}{\delta X^\mu(\sigma)}
(2)L(V μdx μ)= 0 2πdσ μ(σ)V μ(X(σ)) L (V_\mu dx^\mu) = \int_0^{2\pi} d\sigma\, \mathcal{E}^{\dagger \mu}(\sigma) V_\mu(X(\sigma))

etc.
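
A minimal discretized sketch of this map LL (my own illustration: LULU acting on a loop functional is just the directional derivative along the deformation σU(X(σ))\sigma \mapsto U(X(\sigma))):

```python
import numpy as np

# LU = int dsigma U^mu(X(sigma)) delta/delta X^mu(sigma), realized as the
# directional derivative of a loop functional F[X] along X -> X + eps U(X).
n = 20_001
sigma = np.linspace(0.0, 2.0 * np.pi, n)
X = np.stack([np.cos(sigma), np.sin(sigma)])          # a loop in R^2

U = lambda x, y: np.stack([-y, x])                    # rotation field on target space
F = lambda X: np.trapz((X**2).sum(axis=0), sigma)     # a simple loop functional

eps = 1e-6
print((F(X + eps * U(*X)) - F(X)) / eps)              # ~ 0: F is rotation invariant

V = np.stack([np.ones(n), np.zeros(n)])               # L of the constant field d/dx
F2 = lambda X: np.trapz(X[0], sigma)
print((F2(X + eps * V) - F2(X)) / eps)                # ~ 2*pi = int dsigma * 1
```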

P.S.

Just for clarity, could we write this as

Yes, precisely. I should have made this clearer.

Posted by: Urs Schreiber on July 25, 2004 4:15 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs,

Thanks for that explanation. I feel like we are making progress, but I still have a ways to go before understanding this stuff.

First, what is the difference between

(1) 0 2πdσU μ(X(σ))δδX μ(σ) \int_0^{2\pi} d\sigma U^\mu(X(\sigma)) \frac{\delta}{\delta X^\mu(\sigma)}

and

(2) 0 2πdσU μ(X(σ))X μ| X(σ)? \int_0^{2\pi} d\sigma U^\mu(X(\sigma)) \left. \frac{\partial}{\partial X^\mu}\right|_{X(\sigma)}?

Oh yeah. I didn’t understand this at all…

If UU is a vector field on target space then I would expect U| X(σ)\left. U\right|_{X(\sigma)} to be the vector of that field at the point X(σ)X(\sigma).

So would I.

Then 0 2πdσU| X(σ)\int_0^{2\pi} d\sigma \left. U\right|_{X(\sigma)} would be a formal continuous sum of vectors in target space.

Yep.

But this is not what the left hand side is. The left hand side is really one single vector field, but on loop space.

Huh?

*light bulb*

A point in loop space is a loop in target space. A point pp in target space has a basis of tangent vectors

(3)X μ| p, \left. \frac{\partial}{\partial X^\mu} \right|_p,

where μ{1,,n}\mu \in \{1,\dots,n\}.

A point γ\gamma in loop space has a basis for tangent vectors

(4)X (μ,σ)| γ, \left. \frac{\partial}{\partial X^{(\mu,\sigma)}} \right|_\gamma,

where μ{1,,n}\mu \in \{1,\dots,n\} and σ(0,2π)\sigma\in(0,2\pi). Therefore, there is a continuum of basis vectors on loop space.

Ok. I’ll have to think about that before I give it my blessing (as if it needs it) :)

If this is correct, then you have been extremely/excessively “cavalier” with your use of the symbol “\int”. A tangent vector LU| γ\left. LU\right|_\gamma in loop space should be written something more like

(5)LU| γ=pγU| p, \left. LU\right|_\gamma = \underset{p\in\gamma}{\bigoplus} \left. U\right|_p,

i.e. it is a continuum of target space tangent vectors, one for each point pp on γ\gamma. Then you have

(6)LU| γ+LV| γ=pγ(U| p+V| p). \left. LU\right|_\gamma + \left. LV\right|_\gamma = \underset{p\in\gamma}{\bigoplus} (\left. U\right|_p + \left. V\right|_p).

This means that

(7)LU| γ=pγU| p=U (μ,p) (μ,p)| γ \left. LU\right|_\gamma = \underset{p\in\gamma}{\bigoplus} \left. U\right|_p = \left. U^{(\mu,p)} \partial_{(\mu,p)}\right|_\gamma

is really a sum over μ\mu and a continuum direct sum over points pp of the loop, i.e.

(8)U (μ,p) (μ,p)| γ=pγ μ=1 nU μ μ| p. \left. U^{(\mu,p)} \partial_{(\mu,p)} \right|_\gamma = \underset{p\in\gamma}{\bigoplus} \sum_{\mu = 1}^n \left. U^\mu \partial_\mu \right|_p.

Argh!! Your use of the symbol “\int” killed me! :) This is so simple :)

Ok. Have you seen those big oversized plastic yellow wiffle bats that kids sometimes hit these big oversized wiffle balls around with? Well, if what I write above is correct, then imagine I’ve just *bonked* you over the head with a wiffle bat!! :) Better yet, go out and get one of those wiffle bats and *bonk* yourself over the head for me please :)

Ok. Now that I think I understand what the basic idea is, I’ll need to go back and rethink a lot of things.

I’ll say this is significant progress.

Thanks!
Eric

PS: If LULU is a vector field on loop space and γ\gamma, γ\gamma' are distinct loops that intersect at a point pp in target space, do we demand that

(9)(LU| γ)| p=(LU| γ)| p \left. (\left. LU\right|_\gamma)\right|_p = \left. (\left. LU\right|_{\gamma'})\right|_p

or do we allow more general vector fields that might be multi-valued as viewed from target space? My first reaction is that I would not allow such multi-valued vector fields and demand that every vector field on loop space correspond to a vector field on target space. I understand that this would rule out the reparameterization Killing vector σ| γ\left.\partial_\sigma\right|_\gamma that you use so heavily, but I’m not sure that is a bad thing. There are other ways to display reparameterization invariance, e.g. have it built in by construction.

Posted by: Eric on July 26, 2004 5:36 AM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I agree with some things - but not with everything! :-) Let’s see:

Therefore, there is a continuum of basis vectors on loop space.

Sure. There are as many basis vectors as the space has dimension!

it is a continuum of target space tangent vectors

In a sense for special cases. More precisely, it is the integral over functional derivatives. Not all these functional derivatives need to come from target space vectors. For instance 0 2πdσe inσδδX 1(σ)\int_0^{2\pi}d\sigma\, e^{in\sigma} \frac{\delta}{\delta X^1(\sigma)} is a vector on (parameterized) loop space which does not come from a target space vector the way you indicated. What you write is true for those elements of the tangent bundle over loop space which do come from applying the map LL to some target space vector. But not all vectors on loop space are of this form.

or do we allow more general vector fields that might be multi-valued as viewed from target space?

It is not in our hands to allow a vector field or not. These vector fields just exist. And on parameterized loop space in general (LU| γ) p(LU| γ) p(LU|_\gamma)_p \neq (LU|_{\gamma^\prime})_p. This is nothing to worry about. It is just an indication that the map LL does not have as many natural properties as you are maybe expecting.

There are other ways to display reparameterization invariance, e.g. have it built in by construction.

Parameterized loop space is the object that I am interested in. I don’t know how to handle unparameterized loop space efficiently except for getting it from parameterized loop space by dividing out by the action of reparameterizations; and, worse, it does not pertain to the applications that I need loop space for. Restricting to vector fields on loop space which sit in the range of LL similarly is not an option for these applications - and to be honest I don’t understand why that would be interesting or desirable. In stringy terms it would correspond to restricting attention to the center-of-mass mode of the string, while ignoring all excitations.

Posted by: Urs Schreiber on July 26, 2004 10:28 AM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs :)

I agree with some things - but not with everything! :-)

Well, I hope that when we iron out those last notational issues you will agree with MOST of what I said :)

For instance, I hope that you agree that an arbitrary tangent vector LU| γ\left. LU\right|_\gamma in loop space can be expressed as

(1)LU| γ=pS 1U| γ(p), \left. LU\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U\right|_{\gamma(p)},

where

(2)γ:S 1. \gamma:S^1\to\mathcal{M}.

Wait! I can feel you objecting already, but wait :) Let me explain the notation.

The tangent vector

(3)U| γ(p) \left. U\right|_{\gamma(p)}

is meant to be just some choice of tangent vector from the tangent space and, for the time being, we do not need to think of UU as a vector field on target space, i.e. we are allowing multi-valued tangent vectors on target space.

Do you agree with this so far?

Just to clarify, if p,qS 1p,q\in S^1 and pqp\ne q with γ(p)=γ(q)\gamma(p) = \gamma(q) we could have

(4)U| γ(p)U| γ(q). \left. U\right|_{\gamma(p)} \ne \left. U\right|_{\gamma(q)}.

For now.

Eric

Posted by: Eric on July 26, 2004 3:10 PM | Permalink | Reply to this

Re: Still Trying

Hi Urs,

In private email, we said

Hi Eric -

I am typing with single digits on my left hand. I was involved in a cycling accident (on my brand new racing bike!). I am OK, but broke my wrist. I won’t make it to work.

O dear. I wish you all the best then!

Fortunately we are living in an age of information technology where we can keep you in touch with the outside world - such as the latest on flat connections on loop space! ;-)

Of course you can think of the tuple of components as the vector if you want. I agree! :-)

Whoa! Where did this statement come from? I would be the last person to promote such an idea :)

I would have thought so, too. ;-)

So in finite dimensions you would always write a vector as

(1)V= μV μ μ V = \sum_\mu V^\mu \partial_\mu

instead of

(2)V= μV μ, V = \oplus_\mu V^\mu ,

right?

Apparently I caused a problem by, without much ado, turning the sum in

(3) μV μ μ \sum_\mu V^\mu \partial_\mu

into an integral when we are working on a space of uncountably large dimension.

(4)dσV(σ)δδX(σ). \int d\sigma V(\sigma) \frac{\delta}{\delta X(\sigma)} .

But recall that δδX(σ)\frac{\delta}{\delta X(\sigma)} is the functional derivative. It acts on a “coordinate” X(σ)X(\sigma) on our space as

(5)δδX(σ)X(κ)=δ(σκ). \frac{\delta}{\delta X(\sigma)} X(\kappa) = \delta(\sigma-\kappa) .

This delta-distribution on the right forces us to use the integral, because otherwise we’d get something like

(6)( σV(σ)δδX(σ))X(κ)=V(κ)δ(0), (\sum_\sigma V(\sigma) \frac{\delta}{\delta X(\sigma)}) X(\kappa) = V(\kappa) \delta(0) ,

which is not well defined.

Therefore the integral really is the right generalization of the sum over basis elements of the vector space.
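
A minimal lattice version of this argument (my own illustration, with NN points on the loop and the functional derivative discretized as (1/Δσ)/X i(1/\Delta\sigma)\,\partial/\partial X_i):

```python
import numpy as np

# With N lattice points, dX_j/dX_i = delta_ij turns into
# delta(sigma_i - sigma_j) ~ delta_ij / dsigma, so only the weighted *integral*
# sum_i dsigma V_i * (1/dsigma) d/dX_i gives a finite, cutoff-independent answer.
N = 1000
dsigma = 2.0 * np.pi / N
sigma = dsigma * np.arange(N)
V = np.sin(sigma)

j = 137     # apply the vector field to the coordinate function X(kappa), kappa = sigma[j]
result = sum(dsigma * V[i] * (1.0 / dsigma) * (1.0 if i == j else 0.0) for i in range(N))
print(result, V[j])   # both equal V(kappa); a bare sum would instead give V(kappa)/dsigma
```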

But let’s continue this on the SCT.

Ok! :)

I never thought of physics as an endurance sport, but with the shape my wrist is in (I can’t even see an orthopedist to have it set until tomorrow!) just getting that last bit to compile was exhausting :)

I think we are making good progress so thank you for bearing with me.

I think that expressing a tangent vector on loop space in terms of a continuum direct sum of tangent vectors on target space is a good idea. What I am suggesting is not as bad as writing a vector as a direct sum of its components as you said here

So in finite dimensions you would always write a vector as

(7)V= μV μ μ V = \sum_\mu V^\mu \partial_\mu

instead of

(8)V= μV μ, V = \oplus_\mu V^\mu ,

right?

First of all, of course that would be terribly coordinate dependent, which goes against everything I stand for and I think you know that :)

Please take a moment and draw some pictures or something to really think about what I mean when I write

(9)LU| γ=pS 1U| γ(p). \left. LU\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U\right|_{\gamma(p)}.

This expression is absolutely parameterization and coordinate independent. To see what this looks like, draw a picture of a loop γ(S 1)\gamma(S^1) in some target space and for each point pp of the loop assign (draw) a tangent vector U| p\left. U\right|_p. If the loop is self-intersecting, then you can actually choose two distinct tangent vectors at the same point in target space. Hence, UU need not be a vector field on \mathcal{M} because of this possible multi-valuedness. The simplest example would be to draw the tangent vector

(10)K| γ=s[0,2π]dds| γσ(s), \left. K\right|_\gamma = \underset{s\in [0,2\pi]}{\bigoplus} \left. \frac{d}{ds}\right|_{\gamma\circ\sigma(s)},

where

(11)γ:S 1 \gamma:S^1\to\mathcal{M}

and

(12)σ:[0,2π]S 1. \sigma:[0,2\pi]\to S^1.

In other words, at each point of the loop draw a vector tangent to the loop. If the loop is a figure “8”, then at the intersection point, you will have two distinct tangent vectors.

I hope this helps clarify what I mean when I write

(13)LU| γ=pS 1U| p. \left. LU\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U\right|_p.

It seems I should also clarify another important point. When I write S 1S^1, I mean the manifold, i.e. it is parameterizable, but not yet parameterized. A parameterization of S 1S^1 would be a map

(14)σ:[0,2π]S 1. \sigma:[0,2\pi]\to S^1.

I have tried to make this explicit throughout. Hence, I am really justified in saying that

(15)LU| γ=pS 1U| γ(p) \left. LU\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U\right|_{\gamma(p)}

is parameterization and coordinate independent.

Ok. So far I feel like I’ve exerted the energy equivalent of a 15K run (and have spent about the same amount of time :)). Let’s see how much further I can go :)

For the time being, let’s not sum over repeated indices, i.e. I will write all summations explicitly.

The tangent vector

(16)X μ(γ(p))| γ=X μ| γ(p) \left. \frac{\partial}{\partial X^\mu(\gamma(p))} \right|_{\gamma} = \left. \frac{\partial}{\partial X^\mu} \right|_{\gamma(p)}

is a basis vector for T γ(ℒℳ)T_\gamma(\mathcal{LM}), where pS 1p\in S^1. There is obviously a continuum of these so the tangent space at a point in loop space is infinite dimensional (stating the obvious). However, this infinity is split up into a countable part and an uncountable part. In particular, we have

(17)U(γ(p))| γ= μ=1 nU μ(γ(p))X μ(γ(p))| γ. \left. U(\gamma(p))\right|_{\gamma} = \sum_{\mu =1}^n U^\mu(\gamma(p)) \left. \frac{\partial}{\partial X^\mu(\gamma(p))} \right|_{\gamma}.

In general, there are uncountably many such tangent vectors, one for each pS 1p\in S^1. The crucial question now is,

How are we going to combine this continuum of tangent vectors into a single tangent vector?

Unless I am mistaken, I think you would suggest that we introduce a parameterization σ:[0,2π]S 1\sigma:[0,2\pi]\to S^1 and integrate them, i.e.

(18)U σ| γ= 0 2πdσU(γσ)| γσ. \left. U_\sigma\right|_\gamma = \int_0^{2\pi} d\sigma \left. U(\gamma\circ\sigma)\right|_{\gamma\circ\sigma}.

However, before we can integrate, we need to choose a measure on the loop. It is obvious that the choice you have been suggesting is parameterization specific. This is why I included the subscript σ\sigma above. My first suggestion was to choose the measure vol\vol obtained by pulling back the metric tensor from target space to the loop. I think this would be an improvement over the blatantly parameterization specific construction U σ| γ\left. U_\sigma\right|_\gamma, but it still has some drawbacks, e.g. it requires a metric on target space. I think the correct construction should allow us to study solely topological properties, so this is out for the time being.

I am now suggesting that maybe we should consider an alternative. Instead of integrating, we take a continuum direct sum

(19)U| γ=pS 1U(γ(p))| γ. \left. U\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U(\gamma(p))\right|_\gamma.

This does not force us to choose an ad hoc measure and is clearly parameterization and coordinate independent by construction.

Although I think it would be neat if we could treat σ\sigma as simply a continuum index and compute away, it is probably wise to think about whether this is really justified. My gut tells me that these continuum indices are manifestly different from their countable counterparts and require a little more care than we have been providing so far.

Phew! 20K! I need some Gatorade (not to mention Percocet!!) :)

Best wishes,
Eric

Posted by: Eric on July 27, 2004 10:00 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

have we agreed on whether we are talking about parameterized or unparameterized loop space? I for one mean and always have meant parameterized loop space.

I am still a little confused about what you write, for instance because you have XX and γ\gamma appearing together in your formulas as if these were two different things. Maybe by γ\gamma you mean the unparameterized loop? At least on parameterized loop space X:(0,2π)X : (0,2\pi) \to \mathcal{M} is the loop.

But I feel the discussion would benefit from some actual calculations.

Consider two vector fields KK and VV on parameterized loop space with components K (μ,σ)(X)=X μ(σ)K^{(\mu,\sigma)}(X) = X^{\prime \mu}(\sigma) and V (μ,σ)(X)=cos(X λ(σ)X κ(σ)η λκ)e inσV^{(\mu,\sigma)}(X) = \cos(X^\lambda(\sigma)X^\kappa(\sigma)\eta_{\lambda\kappa})e^{in\sigma}.

What is their Lie bracket?

You probably know how I would calculate that, but I would like to see you calculating it in your notation. I am hoping that seeing your notation in action will clarify it for me. (Of course I simply made up these vectors in order to have an example. Pick any other not too trivial vector fields if you like.)
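
For readers who want to check their own answer, here is a minimal discretized version of this exercise (my own sketch, with the Euclidean metric in place of η\eta and the bracket computed by finite differences; with these conventions one finds analytically [K,V]=inV[K,V] = -i n V, since KK generates rigid σ\sigma-translations and only the explicit e inσe^{in\sigma} fails to be translation covariant):

```python
import numpy as np

# K^{(mu,sigma)}(X) = X'^mu(sigma),
# V^{(mu,sigma)}(X) = cos(X(sigma).X(sigma)) e^{i n sigma}  (same for each mu);
# bracket via [K,V](X) = d/deps V(X + eps K(X)) - d/deps K(X + eps V(X)) at eps=0.
N, d, n_mode = 4096, 2, 3
sigma = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ds = 2.0 * np.pi / N

def dds(f):   # periodic central difference d/dsigma
    return (np.roll(f, -1, axis=-1) - np.roll(f, 1, axis=-1)) / (2.0 * ds)

Kfield = lambda X: dds(X)                                   # K(X) = X'
def Vfield(X):
    v = np.cos((X**2).sum(axis=0)) * np.exp(1j * n_mode * sigma)
    return np.broadcast_to(v, (d, N)).copy()                # same component for each mu

X = np.stack([np.cos(sigma), 0.5 * np.sin(sigma)])          # some loop in R^2
eps = 1e-6
bracket = (Vfield(X + eps * Kfield(X)) - Vfield(X - eps * Kfield(X))) / (2 * eps) \
        - (Kfield(X + eps * Vfield(X)) - Kfield(X - eps * Vfield(X))) / (2 * eps)

# compare with the analytic answer [K, V] = -i * n * V
print(np.max(np.abs(bracket + 1j * n_mode * Vfield(X))))    # small (discretization error)
```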

Posted by: Urs Schreiber on July 28, 2004 10:41 AM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Good morning :)

have we agreed if we are talking about parameterized or unparameterized loop space? I for one mean and always have meant parameterized loop space.

Too bad :) Did you at least draw the pictures I asked you to? :) After the pain I went through typing it, I hope you could at least do that! :)

For the record, let me state once and for all that I know you are dealing with a parameter-dependent loop space (PLS). I will refer to it as “parameter dependent” instead of “parameterized” because the latter does not make it as obvious what a nasty space it really is :)

I am trying to do a few things simultaneously. The first is to understand the parameter dependent machinery you are working with. I think I am forming a pretty good idea about that. Another thing I am attempting to do is to demonstrate that there is an alternative parameter independent machinery that can do the same thing you want to do, but more naturally. With that said, please understand that I am not claiming that there is anything mathematically incorrect in your papers (nor in all the other papers you refer to that deal with parameter dependent loop space). Rather, I am suggesting that there is a better way to get the same job done.

Here is the way I’m thinking about it. Take ordinary differential geometry (ODG) on manifolds for example. There, we can pretty much demonstrate all the machinery in pictures because ODG is coordinate independent by construction. However, let’s forget that we know about diffeomorphisms and convince ourselves that we must describe a point as an nn-tuple of numbers. Call the resulting coordinate dependent space “parameter-dependent point space” (PPS). Before long we would discover that the same point in the manifold actually occupies many different points in PPS. If the original manifold was nn-dimensional, then PPS could be thought of as R NR^N for NnN\gg n. We could obviously define calculus on R NR^N, but we would find that many things we compute will depend on coordinates, which shouldn’t come as a surprise. This sad situation is cured one day when someone discovers that we can get coordinate independent results by considering the kernel of some operator K\mathcal{L}_K. We are saved! :)

No. Not really because then some guy named Riemann comes along and demonstrates that we could have built up a theory called ODG that is independent of coordinates by construction so that we no longer need to look at kernels of operators to get coordinate independent results.

This is precisely the same situation I see when I look at the loop space literature you refer to.

I’m obviously not qualified to play the role of Riemann for loop space. The best I can do is stand up on the highest mountain I can find and scream, “We need a Riemann to fix this!” Something like a mathematical physics version of the “Bat Signal.” :)

Let me refer to the, yet to be defined, differential geometry of unparameterized loop space as OLS in contrast to PLS. The situation with OLS and PLS might not be as obviously severe as that between ODG and PPS because the latter is finite dimensional so it is clear that NnN\gg n. With the former, it might not be so obvious because both PLS and OLS are infinite dimensional.

To summarize once again the answer to your question, I understand that you are dealing with PLS while I am simultaneously trying to learn PLS and develop OLS.

I am still a little confused about what you write, for instance because you have XX and γ\gamma appearing together in your formulas as if this were two different things. Maybe by γ\gamma you mean the unparameterized loop? At least on parameterized loop space X:(0,2π)X:(0,2\pi)\to\mathcal{M} is the loop.

I am getting tired and nauseous from pain medication, so let me just quote my earlier remark

It seems I should also clarify another important point. When I write S 1S^1, I mean the manifold, i.e. it is parameterizable, but not yet parameterized. A parameterization of S 1S^1 would be a map

(1)σ:[0,2π]S 1. \sigma:[0,2\pi]\to S^1.

I have tried to make this explicit throughout. Hence, I am really justified in saying that

(2)LU| γ=pS 1U| γ(p) \left. LU\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U\right|_{\gamma(p)}

is parameterization and coordinate independent.

Therefore, since S 1S^1 is unparameterized, the loop

(3)γ:S 1 \gamma:S^1\to\mathcal{M}

is unparameterized. To get a parameterized map we need to compose with a parameterization so that

(4)X=γσ:[0,2π]. X = \gamma\circ\sigma:[0,2\pi]\to\mathcal{M}.

This is pretty much what is normally done in ODG.

What is their Lie bracket?

I will do this later. I need to lay down now :)

Eric

Posted by: Eric on July 28, 2004 1:59 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I will do this later.

No need. It applies only to parameterized loop space (which, I would like to emphasize, should be read as ‘[parameterized loop] space’).

The vectors on the space of unparameterized loops are indeed just vector fields on these loops in target space, at least as long as there are no self-intersections (which could be subtle to deal with). I was all along not fully aware that this was all you meant to say. Sorry for that.

Now that we have clarified this (hopefully :-) I can’t resist adding a little ‘philosophical’ comment myself: It is not always good to divide out all redundancies in physical theories. Gauge theory is elegant when the redundancies are kept and becomes awkward when a gauge is fixed or when group averaging is performed. The gauge freedom of the parameterization on the string is closely related to the gauge freedom of the target space fields it represents.

Posted by: Urs Schreiber on July 28, 2004 6:58 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

I will do this later.

No need.

Phew :) I just got back from the orthopedist. It looks like my wrist is not in good shape. I’ll need minor surgery in the morning. Let’s see how long I can hold out here before passing out again :)

It applies only to parameterized loop space (which, I would like to emphasize, should be read as ‘[parameterized loop] space’).

I should be able to do this, or something like it, for unparameterized loop space (ULS). With my first thoughts, I didn’t get very far because it is not obvious how to define smooth vector fields on loop space, be it PLS or ULS. I have to admit I didn’t get much further than my “first thoughts” yet :)

It is not always good to divide out all redundancies in physical theories. Gauge theory is elegant when the redundancies are kept and becomes awkward when a gauge is fixed or when group averaging is performed. The gauge freedom of parameter on the string is closely related to the gauge freedom of the target space fields it represents.

I understand this and agree 100%. However, I think it is also possible to take this idea too far. My example of “[parameterized point] space” is an extreme case, where too much redundancy was introduced. I may soon realize that having this redundancy for “[parameterized loop] space” is not overkill, but I am not at that point yet. It still seems excessive to me.

Now, I need to pass out once again. Hopefully, I can continue work in my dreams.

Good night for now! :)

Eric

Posted by: Eric on July 28, 2004 8:22 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I wish you all the best and that your wrist will heal soon.

Maybe we should postpone this discussion until you feel better. I’ll be on holidays the next two weeks, anyway.

I haven’t thought much about unparameterized loop space, because I have little use for it in the contexts that interest me. On parameterized loop space it is easy to get the vector fields. But on unparameterized loop space things seem to become really complicated.

Single self-intersections are only the easiest aspect of the problem, which one can probably deal with in the naive way.

But it seems to become really intricate as soon as we have degenerate loops.

For instance consider the constant loop, which occupies just a single point in target space. It is not clear at all how to define a vector on loop space at this point in loop space.

Also, it is hard to deal with loops that are supposed to wrap around themselves several times.

But never mind for the moment. Your health is more important than loop space - even unparameterized loop space!

Posted by: Urs Schreiber on July 29, 2004 12:27 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs,

I’ll miss the discussions while you are on vacation, but I guess the timing is optimal in the sense that I can’t really write very well for the next several weeks anyway :) You and your girlfriend deserve the break, so have a great time! :)

In the meantime, I’ll keep thinking about this stuff. Maybe I will have something useful to say, but regardless, I think it is good (for me at least) to think about these things carefully. My copy of Zwiebach’s book arrived and I’ve been reading it when I get enough energy. It has given me several ideas about parameterizations.

I am still pretty weak, but wanted to say a couple of things before signing off…

I haven’t thought much about unparameterized loop space, because I have little use for it in the contexts that interest me.

I don’t understand how formulating a parameterization independent loop space differential geometry would not be interesting to you. I understand that there is already in place a set of tools that are parameterization dependent and that you can use these tools for calculating things of interest, e.g. expectation values, but I don’t understand why finding a better(?) way to do things would not be interesting.

On parameterized loop space it is easy to get the vector fields. But on unparameterized loop space things seem to become really complicated.

At the moment, I don't see why it would be any easier to get vector fields on parameterized loop space.

But it seems to become really intricate as soon as we have degenerate loops.

Naively, I don’t see why completely degenerate loops would be any more difficult than simple self-intersecting loops.

Ok. One technical remark before lying down again :)

When thinking of unparameterized (but parameterizable) loop space, I'm finding this idea of viewing things as continuum direct sums on target space to be helpful. Once you start doing this, some intermediate concepts turn out to be useful. These can probably be referred to as "p-loop forms" and "p-loop vector fields", both of which are defined on target space. Let

(1) \gamma^p : T^p \to \mathcal{M},

denote a p-loop, where

(2) T^p = \underset{p\ \text{times}}{\underbrace{S^1 \times \cdots \times S^1}}

is a p-torus. For a point r \in \mathcal{M}, we can denote 0-loops via

(3) \gamma^0_r : T^0 \to \mathcal{M}

with

(4) \gamma^0_r(T^0) = r.

A p-loop q-form at a loop \gamma^p may then be defined as

(5) \left. \mathcal{L}^p \alpha \right|_{\gamma^p} = \underset{r \in T^p}{\bigoplus} \left. \alpha \right|_{\gamma^p(r)},

where \left. \alpha \right|_{\gamma^p(r)} is a q-covector at the point \gamma^p(r) \in \mathcal{M}. For 1-loop q-forms we can drop the indices and write simply

(6) \left. \mathcal{L}\alpha \right|_\gamma = \underset{p \in S^1}{\bigoplus} \left. \alpha \right|_{\gamma(p)},

as I had written in a previous post.

A p-loop vector field can be defined similarly, where the 1-loop vector field at a loop \gamma is given by

(7) \left. \mathcal{L}U \right|_\gamma = \underset{p \in S^1}{\bigoplus} \left. U \right|_{\gamma(p)}.

In general, p-loop forms and p-loop vector fields will be multi-valued on target space due to the possibility of self-intersecting and degenerate loops. An exception is 0-loop forms and 0-loop vector fields, in which case we have

(8) \mathcal{L}^0 \alpha \sim \alpha

and

(9) \mathcal{L}^0 U \sim U.

In particular, consider a 1-loop 0-form

(10) \left. \mathcal{L}\phi \right|_\gamma = \underset{p \in S^1}{\bigoplus} \left. \phi \right|_{\gamma(p)}.

For each point p \in S^1 we assign a value at \gamma(p) \in \mathcal{M}. If \gamma is self-intersecting or degenerate in any way, we may get multiple values assigned to the same point in target space.

If we denote the space of p-loop q-forms on \mathcal{M} by \Omega^{(p,q)}(\mathcal{M}), we will want a map

(11) \tr : \Omega^{(p,q)}(\mathcal{M}) \to \Omega^q(\mathcal{L}^p \mathcal{M}),

i.e. a p-loop q-form on target space gives rise, via \tr, to a q-form on (unparameterized) p-loop space.

To understand the importance of the \tr map, consider a 1-loop 0-form \mathcal{L}\phi. If this is to make sense, we should have

(12) \int_\gamma \tr(\mathcal{L}\phi) = \left. \tr(\mathcal{L}\phi) \right|_\gamma.

I want to demand that \tr commutes with the evaluation map, so that

(13) \left. \tr(\mathcal{L}\phi) \right|_\gamma = \tr\left( \left. \mathcal{L}\phi \right|_\gamma \right) \in R.

Since \left. \mathcal{L}\phi \right|_\gamma is essentially a string of values assigned to each point on \gamma, it is clear that \tr obtains a single value from the string of values. The obvious interpretation of \tr is now clear: it is essentially integration along the loop.

I really hope that I can convince you of the importance of this issue, because I think it transcends whether we are dealing with parameterized or unparameterized loops. In both cases, I believe this \tr map plays a crucial role.

If that is the case, which I really do think it is, then we should really think about what it means to integrate along the loop. In order to integrate along the loop, we must first choose a measure. I believe the choice of measure we make is very important. Even if we are dealing with parameterized loop space, I think there is a case to be made for choosing a measure that is parameterization independent.
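
Since this is easy to play with numerically, here is a minimal discretized sketch of what I have in mind (the function names, the test function, and the test loop are mine, purely for illustration): \tr realized as arc-length integration normalized by total length, so the answer does not care how the circle happens to be sampled.

```python
import numpy as np

# A toy discretization (my own, purely illustrative): realize the "tr" map
# as integration of a 1-loop 0-form along the loop, using the arc-length
# measure normalized by the total length, so the result is independent of
# how the circle happens to be parameterized.

def tr_loop(phi, loop_pts):
    """Approximate (1/a) \int_0^a phi(gamma(s)) ds via arc length."""
    ds = np.linalg.norm(np.diff(loop_pts, axis=0, append=loop_pts[:1]), axis=1)
    values = np.array([phi(p) for p in loop_pts])
    return np.sum(values * ds) / np.sum(ds)

phi = lambda p: p[0]**2 + p[1]          # a test 0-form on target space

s = np.linspace(0.0, 2*np.pi, 2000, endpoint=False)
uniform = np.stack([np.cos(s), np.sin(s)], axis=-1)
warped_s = s + 0.3*np.sin(s)            # same circle, unevenly sampled
warped = np.stack([np.cos(warped_s), np.sin(warped_s)], axis=-1)

print(tr_loop(phi, uniform))            # ~0.5
print(tr_loop(phi, warped))             # ~0.5 again: the measure fixes it
```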

I have more to say, but I need to lie down. I hope I can convince you to meditate on this for at least ten minutes. I think you will quickly see what I am talking about, even if I haven't successfully communicated my ideas yet.

Take care!
Eric

Posted by: Eric on July 31, 2004 2:01 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

here is a very quick response:

Parameterized loop space is important because most string states are not rep invariant. Only their expectation values are. The only rep invariant states are the so-called boundary states. So when you restrict to unparameterized loop space you can only deal with boundary states, not with the usual string states. But even in that case it is much easier to use the parameterized loops and project onto functions which take the same value on equivalence classes of loops.

On parameterized loop space it is very simple to get the vector fields, because they are simply linear combinations of the holonomic basis vectors \frac{\delta}{\delta X^\mu(\sigma)}. As I have said before, all you have to do is think of \sigma as an extra index, and then everything goes through as usual.
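
If it helps, here is a quick finite-dimensional toy check of this prescription (the discretization and all names are mine, nothing more): once X^\mu(\sigma) is stored with \sigma as a literal index, directional derivatives of loop functionals are ordinary index contractions.

```python
import numpy as np

# A quick finite-dimensional toy check (my own discretization, nothing
# more): store X^mu(sigma) as an array X[k, mu], so that (mu, sigma_k) is
# literally one big index, and the directional derivative of a functional
# on loop space is the ordinary contraction over that index.

N = 400
sigma = np.linspace(0.0, 2*np.pi, N, endpoint=False)
X = np.stack([np.cos(sigma), np.sin(sigma)], axis=-1)   # a discretized loop

def F(X):
    """A loop functional: (1/2pi) \oint |X(sigma)|^2 dsigma, discretized."""
    return np.mean(np.sum(X**2, axis=-1))

V = np.stack([np.cos(sigma), np.zeros(N)], axis=-1)     # a loop-space vector

# Directional derivative two ways: finite difference vs. index contraction.
eps = 1e-6
fd = (F(X + eps*V) - F(X - eps*V)) / (2*eps)
grad = 2*X / N                     # dF/dX[k, mu] for the functional above
print(fd, np.sum(grad * V))        # both ~1.0: sigma behaves as an index
```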

I am currently writing on a machine without MathML support, so I cannot read and comment on the formulas you give.

Posted by: Urs on July 31, 2004 3:26 PM | Permalink | Reply to this

Re: Still Trying

Hi Urs :)

It’s good to hear from you :) I will go ahead and reply to this and I may even write several subsequent posts, but let me just say once again that I hope you have a great trip and I will be patiently awaiting your return. Even when you return, I know you will have a lot of things to catch up on and this will be here for whenever you settle down. No matter how long it takes. In fact, I will completely understand if you never get around to responding. Sometimes I learn a lot already just by asking the question :) The last thing I want to do is be a gnat in your ear :)

Parameterized loop space is important because most string states are not rep invariant. Only their expectation values are.

Hmm… I don’t understand this statement. I am thinking that if the expectation values you compute are all independent of parameterization, then you should be able to find representations which are independent of parameterization as well. Is that obviously incorrect somehow? You seem to be responding, although understandeably due to shortage of time, in broad strokes, where you seem to suggest that everything about loop space, parameterized or not, is already known. For example, here

The only rep invariant states are the so-called boundary states. So when you restrict to unparameterized loop space you can only deal with boundary states, not with the usual string states.

it seems like you are telling me that even if I successfully develop (rediscover?) a full working framework of unparameterized (but parameterizable) loop space differential geometry, I will only be able to capture a small subset of relevant string states, i.e. the so-called boundary states. I could be wrong, but this sounds a little premature given that, as far as I know, such a working loop space differential geometry does not exist :)

Maybe I should make my reservations a little more blatant so that perhaps you can understand my motivation a little better. On occasion, I get the impression that you are actually trying to study string theory. I don't know what gives me this impression :) This implies a certain reverence, i.e. you seem to have some belief that string theory is somehow correct as is. If it isn't already obvious, I do not share this reverence. Not at all. In fact, I do not hesitate to think (I actually assume it) that string theory needs to be improved.

That is not to say that I think string theory doesn't have anything interesting to say, because it obviously has a lot of interesting things to say. If I weren't interested, I wouldn't spend so much time and money trying to learn it! :)

The most basic and profound thing I’ve learned from string theory is that extended objects, e.g. strings, branes, etc., are important :) On the other hand, if you start out with the notion that

Strings are fundamental

I don’t think there is only one road this principle leads down. So basically I am starting with this notion and seeing where it takes me. As far as I’m concerned, if it leads me to string theory as we know it, then that is good for string theory, but it isn’t really a concern of mine whether it does or not. I only care that I uncover something sensible.

So in my personal journey, with a lot of help from you, I have come to expect that the correct formulation of loop space differential geometry will have something important to say about a theory of strings (and other extended objects). To be blunt again, I have reservations about your formulation of loop space differential geometry. You seem to be reassured by the fact that your formulation corresponds to well-known objects in string theory. Rather than take this as reassurance that you are doing something right, my reservations about parameter-dependent loop space differential geometry suggest to me that the corresponding objects in string theory may be problematic. Imagine that! :)

To be a little more specific, I think the measure you have chosen in order to change a summation over indices to an integral is bogus. Please correct me if you think I’m wrong, but I don’t see any way that

(1) \langle \alpha, U \rangle_\gamma = \alpha_{(\mu,\sigma)} U^{(\mu,\sigma)} = \frac{1}{2\pi} \int_0^{2\pi} d\sigma\, \alpha_\mu(\sigma) U^\mu(\sigma)

can be correct. Sorry! :) Using a notation more similar to Zwiebach's, an acceptable definition would be

(2) \langle \alpha, U \rangle_\gamma = \int_0^a ds\, \alpha_\mu(s) U^\mu(s) = \int_0^{2\pi} d\sigma\, \frac{\partial s}{\partial\sigma}\, \alpha_\mu(\sigma) U^\mu(\sigma),

where s parameterizes the string by length (energy) and \alpha_\mu(\sigma) is short for \alpha_\mu(s \circ \sigma). This has the, perhaps undesirable, effect I discussed before: the evaluation vanishes as the loop shrinks to zero. If this really is undesirable (I'm not sure it is), we can consider the alternative

(3) \langle \alpha, U \rangle_\gamma = \frac{1}{a} \int_0^a ds\, \alpha_\mu(s) U^\mu(s) = \frac{1}{a} \int_0^{2\pi} d\sigma\, \frac{\partial s}{\partial\sigma}\, \alpha_\mu(\sigma) U^\mu(\sigma).

This version would coincide with yours as the loop shrinks to zero, but would differ for finite loops. It would also agree if s = a\frac{\sigma}{2\pi}.
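
To make this concrete, here is a toy numerical comparison (my own setup, nothing canonical): the flat measure of (1) changes when the same circle is resampled with a different parameterization, while the normalized length measure of (3) does not.

```python
import numpy as np

# A toy numerical check (my own setup, purely illustrative): pair a
# covector field alpha with a vector field U along the unit circle, once
# with the flat dsigma/2pi measure of Eq. (1) and once with the normalized
# length measure of Eq. (3), then resample the same circle differently.

def pair_flat(alpha, U, pts):
    vals = np.array([np.dot(alpha(p), U(p)) for p in pts])
    return np.mean(vals)                       # (1/2pi) \oint ... dsigma

def pair_length(alpha, U, pts):
    ds = np.linalg.norm(np.diff(pts, axis=0, append=pts[:1]), axis=1)
    vals = np.array([np.dot(alpha(p), U(p)) for p in pts])
    return np.sum(vals * ds) / np.sum(ds)      # (1/a) \oint ... ds

alpha = lambda p: np.array([p[1], 0.0])        # test fields on target space
U     = lambda p: np.array([p[1], p[0]])

s = np.linspace(0, 2*np.pi, 3000, endpoint=False)
pts0 = np.stack([np.cos(s), np.sin(s)], axis=-1)
t = s + 0.4*np.sin(s)                          # another parameterization
pts1 = np.stack([np.cos(t), np.sin(t)], axis=-1)

print(pair_flat(alpha, U, pts0), pair_flat(alpha, U, pts1))      # differ
print(pair_length(alpha, U, pts0), pair_length(alpha, U, pts1))  # both ~0.5
```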

This has a kind of neat QM interpretation as an expectation value of an operator over a string vacuum state |1_\gamma\rangle. We can define

(4) |\alpha_\gamma\rangle = \alpha^\dagger |1_\gamma\rangle

and

(5) |(i_U \alpha)_\gamma\rangle = U |\alpha_\gamma\rangle = U \circ \alpha^\dagger |1_\gamma\rangle

so that

(6) \langle U \circ \alpha^\dagger \rangle_\gamma = \frac{\langle 1_\gamma | U \circ \alpha^\dagger | 1_\gamma \rangle}{\langle 1_\gamma | 1_\gamma \rangle}.

Now we have

(7) \langle 1_\gamma | U \circ \alpha^\dagger | 1_\gamma \rangle = \int_0^a ds\, \alpha_\mu(s) U^\mu(s)

and

(8) \langle 1_\gamma | 1_\gamma \rangle = \int_0^a ds = a

so that

(9) \langle U \circ \alpha^\dagger \rangle_\gamma = \langle \alpha, U \rangle_\gamma = \frac{1}{a} \int_0^a ds\, \alpha_\mu(s) U^\mu(s).

This is not completely uninteresting :)

Gotta run for now.

Best wishes,
Eric

Posted by: Eric on July 31, 2004 7:39 PM | Permalink | Reply to this

Re: Still Trying

Please correct me if you think I’m wrong, but I don’t see any way that

(1) \langle \alpha, U \rangle_\gamma = \alpha_{(\mu,\sigma)} U^{(\mu,\sigma)} = \frac{1}{2\pi} \int_0^{2\pi} d\sigma\, \alpha_\mu(\sigma) U^\mu(\sigma)

can be correct.

Perhaps another way to say this more succinctly: if X^{\mu'}(\sigma') is a different coordinate patch with a different parameterization, we should demand

(2) \langle \alpha, U \rangle_\gamma = \alpha_{(\mu,\sigma)} U^{(\mu,\sigma)} = \alpha_{(\mu',\sigma')} U^{(\mu',\sigma')}.

This requirement seems obvious and unavoidable to me. Satisfying it requires a better choice of measure when writing a summation over indices as an integral. The length-based (energy-based) measure clearly satisfies this criterion.

Eric

Posted by: Eric on July 31, 2004 8:45 PM | Permalink | Reply to this

Re: Still Trying

On parameterized loop space it is very simple to get the vector fields, because they are simply linear combinations of the holonomic basis vectors \frac{\delta}{\delta X^\mu(\sigma)}.

I hope you will agree that there is nothing special here about parameterized loop space. I can do precisely the same thing with unparameterized (but parameterizable) loop space. The only difference, as far as I can see, is that when you construct the proper loop space you can write down

(1) U = U^{(\mu,\sigma)} \partial_{(\mu,\sigma)} = U^{(\mu',\sigma')} \partial_{(\mu',\sigma')},

where X^\mu(\sigma) and X^{\mu'}(\sigma') are different parameterizations of the same loop.

As I have said before, all you have to do is to think of sigma as an extra index and then everything goes through as usual.

Right. I understand this, but again, this is not special to parameterized loops. They only need to be parameterizable. If you choose the correct measure for converting the summation to an integration, everything works out in a parameterization-independent manner. I hope that if I keep saying the same thing enough times in different ways, I will eventually make sense :)

Eric

Posted by: Eric on July 31, 2004 11:56 PM | Permalink | Reply to this

Re: Still Trying

Hi Urs,

After rereading your post, I see something you will probably disagree with in my response. Hopefully this will clarify things. You said

For instance \int_0^{2\pi} d\sigma\, e^{in\sigma} \frac{\delta}{\delta X^1(\sigma)} is a vector on (parameterized) loop space which does not come from a target space vector the way you indicated.

First of all, I would suggest rewriting this as

(1) \underset{s \in [0,2\pi]}{\bigoplus} e^{ins} \left. \frac{\delta}{\delta X^\mu} \right|_{\gamma \circ \sigma(s)},

where

(2) \sigma : [0,2\pi] \to S^1

and

(3) \gamma : S^1 \to \mathcal{M}.

This might not be 100% correct either, but the point is you need to stop using that “\int” symbol to mean continuum direct sum. In an integral you do not remember the point you summed over. In a direct sum, you do.

Second of all, I’m willing to bet that you don’t really care about this tangent vector on loop space anyway. What you probably care about is

(4) \underset{s \in [0,2\pi]}{\bigoplus} e^{ins} \left. \frac{dX^\mu}{ds} \frac{\delta}{\delta X^\mu} \right|_{\gamma \circ \sigma(s)},

which, I believe, can be rewritten as

(5) \underset{s \in [0,2\pi]}{\bigoplus} e^{ins} \left. \frac{\partial}{\partial s} \right|_{\gamma \circ \sigma(s)}.

This tangent vector obviously has a target space representation as the bunch of tangent vectors tangent to the loop at each point in target space, weighted by a Fourier mode.

Regardless of whether the above tangent vector on loop space is equivalent to the counterexample you gave, I suspect that the tangent vector

(6) \underset{s \in [0,2\pi]}{\bigoplus} e^{ins} \left. \frac{\partial}{\partial s} \right|_{\gamma \circ \sigma(s)}

does what you want it to do, i.e. computes Fourier modes of functions defined on loops. Or something like that :)
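
As a sanity check on that claim, here is a tiny numerical experiment (discretization mine): summing the action of this vector against a function f on the loop amounts to \oint e^{ins} f'(s) ds, which by integration by parts is -i n 2\pi times a Fourier coefficient of f.

```python
import numpy as np

# A tiny numerical experiment (my own discretization): the weighted
# tangential vector  \bigoplus_s e^{ins} d/ds  applied to a function f on
# the loop and summed gives  \oint e^{ins} f'(s) ds, which by integration
# by parts equals  -i n 2pi c_{-n}  with  f(s) = sum_m c_m e^{ims}.

N, n = 4096, 3
s = np.linspace(0, 2*np.pi, N, endpoint=False)
f = 0.7*np.cos(3*s) + 0.2*np.sin(5*s)          # so c_{-3} = 0.35

fprime = np.gradient(f, s, edge_order=2)       # d/ds along the loop
action = np.sum(np.exp(1j*n*s) * fprime) * (2*np.pi/N)

print(action / (-1j * n * 2*np.pi))            # ~0.35, the e^{-3is} mode
```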

Ok. Now you can let me have it. I’m ready for brutal enlightenment :)

Best wishes,
Eric

Posted by: Eric on July 26, 2004 3:55 PM | Permalink | Reply to this

Re: Still Trying

I admit I am not being careful about distinguishing between \delta, \partial, and d, but I think what I wanted to write was

(1) \underset{s \in [0,2\pi]}{\bigoplus} e^{ins} \left. \frac{d}{ds} \right|_{\gamma \circ \sigma(s)}.

Eric

Posted by: Eric on July 26, 2004 4:01 PM | Permalink | Reply to this

Re: Still Trying

Hi Eric -

I am not a notation purist and will be content if we find some way to communicate abstract ideas - but… :-)

First but: A vector is not a direct sum, is it? It is an element of a vector space, and if that space has a basis, it is a linear combination of the basis elements. Here the basis elements are \frac{\delta}{\delta X^\mu(\sigma)}, and when I write a linear combination of these I do want to write an integral, so that the result really is a nice derivation on the algebra of functions over loop space, as it should be.

Second but: I do need modes of vector fields other than the rep Killing vector. I do not understand why you want to ignore most of the vector fields that there are on loop space.

Let me emphasize again that I think the best way to think about loop space differential geometry is to put the spacetime index \mu together with the parameter \sigma into a single index I = (\mu,\sigma) and then proceed precisely as in finitely many dimensions. If you accept this prescription, it automatically answers all the questions that we are currently discussing. If you do not accept it, then let me know why! :-)

Posted by: Urs Schreiber on July 26, 2004 4:50 PM | Permalink | PGP Sig | Reply to this

Re: Still Trying

Hi Urs :)

First but: A vector is not a direct sum, is it? It is an element of a vector space, and if that space has a basis, it is a linear combination of the basis elements.

A direct sum can be a vector, and in what I described it is a vector. Really, all we need to check is that we have well-defined scalar multiplication and addition. For scalar multiplication we have

(1) k\left( \left. LU \right|_\gamma \right) = \underset{p \in S^1}{\bigoplus} k\left( \left. U \right|_{\gamma(p)} \right).

Addition is similarly obvious:

(2) \left. LU \right|_\gamma + \left. LV \right|_\gamma = \underset{p \in S^1}{\bigoplus} \left( \left. U \right|_{\gamma(p)} + \left. V \right|_{\gamma(p)} \right).

This seems to be a well-defined vector space to me. For each point on the loop we have a different vector space, and the vector space structure on loops is inherited from that on target space.

Here the basis elements are \frac{\delta}{\delta X^\mu(\sigma)} and when I write a linear combination of these I do want to write an integral, so that the result really is a nice derivation on the algebra of functions over loop space, as it should be.

I would define a function Lf at a point \gamma in loop space as a direct sum of functions f at points along a loop in target space. Again, the function can be multi-valued as seen from target space, i.e.

(3) Lf(\gamma) = \underset{p \in S^1}{\bigoplus} f(\gamma(p)).

With this definition, my tangent vectors on loop space are certainly derivations of functions on loop space, directly inherited from target space, i.e.

(4) \left. LU (Lf\, Lg) \right|_\gamma = \left. LU (Lf) \right|_\gamma\, Lg(\gamma) + Lf(\gamma)\, \left. LU (Lg) \right|_\gamma.
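
Here is a throwaway numerical check of that product rule (my own discretization, purely illustrative): Lf and Lg are strings of values along a discretized loop, LU acts by deforming the loop along a vector field, and the Leibniz rule holds point by point.

```python
import numpy as np

# A throwaway check (my own discretization) that LU acts as a derivation:
# Lf, Lg are strings of values along the loop, LU(Lf) is the pointwise
# directional derivative of f along a deformation field U of the loop,
# and the Leibniz rule holds point by point.

s = np.linspace(0, 2*np.pi, 500, endpoint=False)
pts = np.stack([np.cos(s), np.sin(s)], axis=-1)     # a discretized loop

f = lambda p: p[..., 0]**2 + p[..., 1]
g = lambda p: np.sin(p[..., 0]) * p[..., 1]
U = np.stack([np.sin(2*s), np.cos(s)], axis=-1)     # deformation of the loop

eps = 1e-6
D = lambda h: (h(pts + eps*U) - h(pts - eps*U)) / (2*eps)

fg = lambda p: f(p) * g(p)
lhs = D(fg)                                         # LU(Lf Lg), pointwise
rhs = D(f)*g(pts) + f(pts)*D(g)                     # Leibniz rule
print(np.max(np.abs(lhs - rhs)))                    # ~0 at every point
```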

Before I proceed, please remember that I am just trying to understand this stuff, not trying to cause trouble :) In fact, this whole thing is just my way of attempting to rewrite what you have already done. From what you say, I suspect that I am further away from understanding than I thought I was. Nonetheless, I don't think the construction I am building is vacuous and uninteresting. In fact, it may end up being equivalent once we start computing expectation values, which is my goal. So I appreciate your patience, even though it may not be that obvious why I'm bothering with this :)

Second but: I do need modes of vector fields other than the rep Killing vector. I do not understand why you want to ignore most of the vector fields that there are on loop space.

Actually, I wouldn’t say that I necessarily want to ignore them. Not yet anyway. I may decide to ignore them later if there appears a reason to do so. It might be possible that this tangent vector you say you need can be replaced by

(5)s[0,2π]e insX μ| γσ(s). \underset{s\in [0,2\pi]}{\bigoplus} e^{ins} \left. \frac{\partial}{\partial X^\mu} \right|_{\gamma\circ\sigma(s)}.

Granted LfLf is not what one might usually think of as a 0-form on loop space because it results in a string of scalar numbers when evaluation at a point γ\gamma in loop space, but we can probably correct this if necessary by adding some measure to the loop and integrating the numbers to a single value. Something like a trace around the loop. Using notation similar to our notes, this would be something like

(6)1|Lf(γ). \langle 1|Lf(\gamma)\rangle.

Before leaving, let me also say that everything I am trying to do here is motivated by a sentence in your deformation paper

The tangent space T XℒℳT_X\mathcal{LM} of ℒℳ\mathcal{LM} at a loop X:S 1X: S^1\to\mathcal{M} is the space of vector fields along that loop.

My equation

(7)LU| γ=pS 1U| γ(p) \left. LU\right|_\gamma = \underset{p\in S^1}{\bigoplus} \left. U\right|_{\gamma(p)}

is nothing more than my attempt to write down what I thought “vector fields along a loop” should look like.

Eric

Posted by: Eric on July 26, 2004 8:07 PM | Permalink | Reply to this

Parameterized vs Unparameterized Loops

Hi Urs,

I have started writing up some notes on loop space differential geometry. I think you will like them very much. Especially all the figures :)

I am very happy to announce that I have finally come to a clear understanding about something that has been causing us frustrating communication problems :)

The key to understanding one another is to appreciate these essential points:

What I have been calling an “unparameterized loop” is a map

(1) \gamma : S^1 \to \mathcal{M},

where S^1 is to be thought of as an intrinsic smooth manifold, not necessarily embedded in R^2. In particular, there is no predefined parameterization of S^1. Conceptually, I think of S^1 as something like a loop of thread that can be tossed around and smoothly deformed without changing its nature, i.e. it is just a bunch of points making up a closed loop. I certainly do not think of S^1 as having a pre-existing parameterization. It is parameterizable, not parameterized. This is a crucial point and is why I refer to a map

(2) \gamma : S^1 \to \mathcal{M}

as an unparameterized loop. Aside from some manageable details, we can define a parameterization of S^1 as a diffeomorphism

(3) \sigma : R/{2\pi} \to S^1.

The loops you are dealing with are what I would consider to be a composition of a parameterization of S^1 followed by an unparameterized loop, i.e.

(4) X = \gamma \circ \sigma : R/{2\pi} \to \mathcal{M}.

I am pretty sure that you would agree with everything I’ve said up to this point. But here is something crucial that I think we’ve both misunderstood (or I haven’t explained myself clearly enough yet) about my version of unparameterized loop space so far. Given two distinct unparameterized loops

(5) \gamma, \gamma' : S^1 \to \mathcal{M}

we could still have their images coincide in target space, i.e.

(6) \gamma(S^1) = \gamma'(S^1)

even though we may have

(7) \gamma(p) \ne \gamma'(p)

for all p \in S^1. Because of this, I can still define vector fields K corresponding to your Killing vectors in my unparameterized loop space. In fact, I was puzzled by why you thought I couldn't define K on my loop space. The confusion is most definitely my fault, because I never gave you a precise definition of my loop space (then again, you never gave me a precise definition of yours either! :)). Worse, I probably gave you conflicting information as my ideas were evolving. That is the cost of brainstorming in the open like this, but I think the reward far outweighs the cost, so I will continue providing half-baked ideas here. When an idea begins to solidify, we can write things up more formally. That is what I'm doing now. I'm pretty sure you will like the result (I hope!) :)

Best wishes,
Eric

PS: Here is a parting thought…

If

(8) \gamma, \gamma' : S^1 \to \mathcal{M}

are distinct unparameterized loops and

(9) \sigma, \sigma' : R/{2\pi} \to S^1

are distinct parameterizations of S^1, then you might still end up with

(10) X = \gamma \circ \sigma = \gamma' \circ \sigma'.

In other words, the way that \gamma differs from \gamma' might be compensated by the way \sigma differs from \sigma'. This subtlety may complicate a comparison of our frameworks.

Posted by: Eric on August 2, 2004 3:05 PM | Permalink | Reply to this

Re: Parameterized vs Unparameterized Loops

Hi Urs,

Last night, I reread all 92 posts in this thread so far :) Now that I am writing up these semi-formal notes, things are becoming more and more clear.

Just for the record, I see that when you have been talking about unparameterized loop space, you are often talking about the loop space where all points that have the same image in target space are identified. At one point, I suggested this idea for "loop space of loop space." For terminology, let me refer to this as "reduced loop space", i.e. in reduced loop space, two points are identified if they have the same image in target space.

Let me just clarify that there is an intermediate loop space between your space of parameterized loops and reduced loop space. That is the space of all "loops", where I define a loop as a smooth map

(1) \gamma : S^1 \to \mathcal{M}

and we are to think of S^1 intrinsically, as a manifold without a predefined parameterization. On the other hand, a parameterized loop is a smooth map

(2) X : R/{2\pi} \to \mathcal{M}.

Therefore, it seems there are three versions of loop space we can deal with:

1.) Loop Space
2.) Parameterized Loop Space
3.) Reduced Loop Space

Since a loop \gamma : S^1 \to \mathcal{M} and a parameterized loop X : R/{2\pi} \to \mathcal{M} are related by a diffeomorphism

(3) \Sigma : R/{2\pi} \to S^1,

I am beginning to think that loop space and parameterized loop space are actually equivalent :) If that is the case, then I propose we work with loop space, i.e. loops

(4) \gamma : S^1 \to \mathcal{M}.

This is the approach I am taking in our notes, and as I work out the details, it seems I am getting essentially the same kind of machinery that you've been working with all along, but the parameterization-free formulation seems a bit more natural. Everything that you discuss is still perfectly fine; you would just preface it by saying something like "in a coordinate chart and for a particular choice of parameterization, we have…"

That’s all for now…

Eric

Posted by: Eric on August 3, 2004 4:17 PM | Permalink | Reply to this

Loop Space Differential Geometry

It’s a start :)

Loop Space Differential Geometry
Urs Schreiber and Eric Forgy

Abstract: These are informal notes on loop-space methods resulting from a series of discussions between the authors.

1. Loop Space

Coordinates are evil.

One of the most ironic things a student encounters when first learning differential geometry is the amazing effort that is made in defining all the geometrical gadgets, e.g. manifolds and tensor fields, in terms of coordinates and parameterizations only to spend an equally amazing amount of effort subsequently proving that all of the gadgets were independent of the coordinates and parameterizations chosen in the first place. This irony is even more prevalent when considering the role coordinates and parameterizations play when dealing with loop spaces.

One of the primary purposes of differential geometry is to provide a means to study intrinsic properties of manifolds without thinking of them as being embedded in \IR^n for some sufficiently large n. With this in mind, let's have our first

Definition 1.1 Given a manifold \mathcal{M}, a loop is a smooth map \gamma : S^1 \to \mathcal{M}.

Although S^1 may be thought of as being embedded in \IR^2, we will instead think of it intrinsically as a manifold with no predefined parameterization. In other words, S^1 is parameterizable, but not yet parameterized.

Definition 1.2 A parameterization of S^1 is a diffeomorphism \Sigma : \IR/{2\pi} \to S^1.

Definition 1.3 A parameterized loop is a smooth map \gamma \circ \Sigma : \IR/{2\pi} \to \mathcal{M} for some loop \gamma and a specific choice of parameterization \Sigma : \IR/{2\pi} \to S^1.

Definition 1.4 Loop space \mathcal{LM} is the set of all loops \gamma : S^1 \to \mathcal{M}.

Definition 1.5 The loop map L : \mathcal{LM} \to \mathcal{M} is defined pointwise via L(\gamma) = \gamma(S^1).

In other words, for each point \gamma \in \mathcal{LM}, there is a corresponding loop L(\gamma) \subset \mathcal{M} (see Figure 1).

Figure 1: A point in loop space \mathcal{LM} corresponds to a loop in \mathcal{M} via the loop map L : \mathcal{LM} \to \mathcal{M}.

1.1 Vector Fields

Tangent vectors are easily envisioned as being tangent to some curve on a manifold. This is also true for tangent vectors on loop space. However, a curve on loop space corresponds to sweeping a loop through target space, so the relation between tangent vectors on loop space and those on target space is a little more involved.

Consider the curve on the left side of Figure 2, extending from the point \gamma to the point \gamma' in loop space. The point \gamma in loop space corresponds to the outer loop L(\gamma) in target space, while the point \gamma' corresponds to the inner loop L(\gamma'), as shown on the right side of the Figure. As the curve is traversed in loop space, each point on L(\gamma) traverses its own curve in target space, so that the tangent vector \left. X \right|_\gamma on loop space corresponds to a continuum of tangent vectors in target space, as illustrated. In other words, we have a generalized notion of the push forward of tangent vectors

(1) L_* : T_\gamma(\mathcal{LM}) \to \bigoplus_{p \in S^1} T_{\gamma(p)}(\mathcal{M}),

where each tangent vector \left. X \right|_\gamma \in T_\gamma(\mathcal{LM}) gets sent to a continuum direct sum of corresponding tangent vectors in target space.

Figure 2: A tangent vector \left. X \right|_\gamma on loop space pushes forward to a continuum direct sum of tangent vectors on target space.

The need for a direct sum can be motivated by considering a tangent vector \left. X \right|_\gamma at a self-intersecting loop. If \gamma is self-intersecting, i.e. if there are distinct points p, q \in S^1 with \gamma(p) = \gamma(q), then the tangent vector \left. X \right|_\gamma will in general map to two distinct tangent vectors in T_{\gamma(p)}(\mathcal{M}) = T_{\gamma(q)}(\mathcal{M}), as illustrated in Figure 3. Consequently, if X is a vector field on loop space, then L_*(X) will, in general, not correspond to a vector field on target space, due to this multi-valuedness.

Figure 3: A tangent vector \left. X \right|_\gamma on loop space pushes forward to multi-valued tangent vectors on target space when the loop is self-intersecting.

1.2 Equivalent Loops

As defined, there is a certain amount of redundancy built into loop space. To see this, consider two distinct loops

(2) \gamma, \gamma' : S^1 \to \mathcal{M}

having the same image in target space, i.e.

(3) L(\gamma) = L(\gamma').

This happens whenever we have any diffeomorphism

(4) F : S^1 \to S^1

and set

(5) \gamma' = \gamma \circ F.

It is tempting to introduce a reduced loop space \mathcal{LM}/\sim by defining an equivalence relation

(6) \gamma \sim \gamma' \Leftrightarrow L(\gamma) = L(\gamma'),

which identifies these equivalent loops. For the time being, we will content ourselves with simply noting this redundancy and thinking of it as a new kind of gauge freedom. In the remainder, we will deal with the full, non-reduced, loop space.

1.3 Equivalent Vector Fields

With the redundancy in loop space discussed in the previous Subsection, it follows that for each \gamma \in \mathcal{LM} there will be a loop subspace

(7) \left. \mathcal{LM} \right|_\gamma = \{ \gamma' \in \mathcal{LM} \;|\; L(\gamma') = L(\gamma) \}.

Any curve in \left. \mathcal{LM} \right|_\gamma will, by definition, connect equivalent loops having the same image in target space. The tangent vectors of such curves push forward to distinguished collections of tangent vectors on target space.

Since the image of equivalent loops in loop space is the same loop in target space, as the curve is traversed in loop space the corresponding points in target space sweep around the single loop. Therefore, pushing forward a tangent vector \left. K \right|_\gamma of such a curve in loop space gives rise to a collection of tangent vectors in target space, each of which is tangent to the given loop (see Figure 4).

Figure 4: A tangent vector \left. K \right|_\gamma on a curve connecting equivalent loops pushes forward to tangent vectors on target space that are all tangent to the loop.

If two points \gamma, \gamma' \in \mathcal{LM} have the same image in target space, we would expect such points to be physically indistinguishable. We've just seen that this redundancy in loop space gives rise to tangent vectors in target space that are tangent to the loop. This should make us suspect that only tangent vectors in target space that are transverse to the loop are physically relevant, i.e.

(8) L_* : \left[ T_\gamma(\mathcal{LM}) \right]_{\text{physical}} \to \bigoplus_{p \in S^1} \left[ T_{\gamma(p)}(\mathcal{M}) \right]_{\text{transverse}}.
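
This split is easy to exhibit in a discretization. The following sketch is purely illustrative (the setup and names are not part of the formalism): split the pushed-forward vectors along a sampled loop into tangential "gauge" directions and a transverse remainder.

```python
import numpy as np

# An illustrative discretization (not part of the formalism, just a toy):
# along a sampled loop, split a pushed-forward tangent vector into the
# part tangent to the loop (the reparameterization "gauge" directions)
# and the transverse remainder carrying the physical deformation.

s = np.linspace(0, 2*np.pi, 1000, endpoint=False)
loop = np.stack([np.cos(s), np.sin(s)], axis=-1)

t = np.gradient(loop, s, axis=0)                  # tangents to the loop
t /= np.linalg.norm(t, axis=1, keepdims=True)

# A loop-space tangent vector, pushed forward: one vector per loop point.
X = np.stack([np.sin(2*s), np.cos(s)], axis=-1)

X_tan = np.sum(X*t, axis=1, keepdims=True) * t
X_trans = X - X_tan

print(np.max(np.abs(np.sum(X_trans*t, axis=1))))  # ~0: pointwise transverse
```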

1.4 Coordinate Bases

Given a point \gamma \in \mathcal{LM} whose image L(\gamma) lies completely within a coordinate patch on \mathcal{M}, a tangent vector \left. U \right|_\gamma pushed forward to a continuum direct sum of tangent vectors on target space may be expressed as

(9) L_*(\left. U \right|_\gamma) = \bigoplus_{p \in S^1} U^\mu(\gamma(p)) \left. \frac{\partial}{\partial x^\mu} \right|_{\gamma(p)}.

Furthermore, if a specific parameterization

(10) \Sigma : \IR/{2\pi} \to S^1

is chosen, the push forward may also be expressed as

(11) L_*(\left. U \right|_\gamma) = \bigoplus_{\sigma \in \IR/{2\pi}} U^{(\mu,\sigma)}(\gamma) \left. \frac{\partial}{\partial x^\mu} \right|_{\gamma \circ \Sigma(\sigma)},

where

(12) U^{(\mu,\sigma)}(\gamma) = U^\mu(\gamma \circ \Sigma(\sigma)).

This choice of notation is made because we would like to express \left. U \right|_\gamma on loop space via

(13) \left. U \right|_\gamma = U^{(\mu,\sigma)}(\gamma) \left. \frac{\partial}{\partial x^{(\mu,\sigma)}} \right|_\gamma

so that \sigma takes on the role of a continuum index for summation. This motivates us to write

(14) L_*\left( \left. \frac{\partial}{\partial x^{(\mu,\sigma)}} \right|_\gamma \right) = \left. \frac{\partial}{\partial x^\mu} \right|_{\gamma \circ \Sigma(\sigma)},

which takes care of the individual basis elements, but we still need to specify how we are to combine the continuum of basis elements into a general tangent vector on loop space.

For the time being, let's drop the implied summation over \sigma and write this as an integral over a measure d\Omega[\gamma \circ \Sigma(\sigma)] that is yet to be determined, i.e.

(15) \left. U \right|_\gamma = \int_0^{2\pi} d\Omega[\gamma \circ \Sigma(\sigma)]\, U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma,

where we have set

(16) \left. \partial_{(\mu,\sigma)} \right|_\gamma = \left. \frac{\partial}{\partial x^{(\mu,\sigma)}} \right|_\gamma.

The measure should be chosen so that

(17) \left. U \right|_\gamma = \int_0^{2\pi} d\Omega[\gamma \circ \Sigma(\sigma)]\, U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma = \int_0^{2\pi} d\Omega[\gamma \circ \Sigma'(\sigma')]\, U^{(\mu',\sigma')}(\gamma) \left. \partial_{(\mu',\sigma')} \right|_\gamma,

which would allow us to bring back the implied summation, resulting in the simplified expression

(18) \left. U \right|_\gamma = U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma = U^{(\mu',\sigma')}(\gamma) \left. \partial_{(\mu',\sigma')} \right|_\gamma. \quad\quad\quad (*)

If we push forward both expansions of the tangent vector to target space, we find that in order to satisfy Equation (*), we must have

(19) d\Omega[\gamma \circ \Sigma(\sigma)] = d\Omega[\gamma \circ \Sigma'(\sigma')],

i.e. the measure should be independent of the parameterization. The only measure that is independent of the choice of parameterization is one proportional to the volume form. Therefore, we will set

(20) d\Omega[\gamma \circ \Sigma(\sigma)] = T_\gamma\, ds[\gamma \circ \Sigma(\sigma)] = T_\gamma\, d\sigma\, \frac{\partial s[\gamma \circ \Sigma(\sigma)]}{\partial \sigma},

where ds is the line element, i.e. the volume form on the loop, and T_\gamma is some constant, so that

(21) \left. U \right|_\gamma = T_\gamma \int_0^{2\pi} d\sigma\, \frac{\partial s[\gamma \circ \Sigma(\sigma)]}{\partial \sigma}\, U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma.

Finally, we can use the above as the implied summation, so that we arrive at the desired expression

(22) \left. U \right|_\gamma = U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma,

where

(23) U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma \implies T_\gamma \int_0^{2\pi} d\sigma\, \frac{\partial s[\gamma \circ \Sigma(\sigma)]}{\partial \sigma}\, U^{(\mu,\sigma)}(\gamma) \left. \partial_{(\mu,\sigma)} \right|_\gamma.

For points \gamma \in \mathcal{LM} such that L(\gamma) lies entirely within a given coordinate patch, this allows us to formally manipulate vector fields

(24) U = U^{(\mu,\sigma)} \partial_{(\mu,\sigma)}

on loop space as we are accustomed to on target space, the main difference being that we have a continuum of indices. It should be noted, however, that additional care would be necessary if target space did not admit a single coordinate patch covering the entire manifold. If L(\gamma) extended beyond the coordinate patch, this notation would obviously cease to be valid.

Posted by: Eric on August 3, 2004 9:01 PM | Permalink | Reply to this

Re: Loop Space Differential Geometry

Hi Urs,

I wish you were here :)

I’ve been working hard on this write up. I was at the point where I could finally write down Stokes’ theorem on loop space. But, alas, the way I have defined things, it doesn’t seem to work out.

As you will know by the time you read this, I have been making tons of arguments about the choice of measure when converting a sum over indices to an integral. I had argued that the correct measure should be

(1) U^{(\mu,\sigma)} \partial_{(\mu,\sigma)} \implies T_\gamma \int_0^{2\pi} d\sigma\, \frac{\partial s[\gamma \circ \Sigma(\sigma)]}{\partial \sigma}\, U^{(\mu,\sigma)} \partial_{(\mu,\sigma)}.

As of this moment, the way things are defined in the paper, I can't see how this is going to allow us to write down Stokes' theorem.

However, in my despair, I noticed that if we simply use the measure you have been using all along, i.e.

(2) U^{(\mu,\sigma)} \partial_{(\mu,\sigma)} \implies \int_0^{2\pi} d\sigma\, U^{(\mu,\sigma)} \partial_{(\mu,\sigma)},

then I can prove Stokes' theorem on loop space in a few lines.

Grumble!! How can this be?!?! :)

If this is true, then it could mean that we really do NOT want loop space to be the space of maps

(3) \gamma : S^1 \to \mathcal{M}

and we really do want to work with parameterized loop space, the space of all parameterized loops

(4) X : \IR/{2\pi} \to \mathcal{M}

after all.

If this turns out to be the case, I doubt you will be very surprised :) On the bright side, if Stokes’ theorem dictates that we work on parameterized loop space, then that would be another notch on the ax for Stokes’ theorem :)

Hmm…

Eric

PS: Fortunately, the way I developed the paper, changing it from loop space to parameterized loop space (and hence to agree with your papers) will require only very minor modifications. I think it will make a very nice little paper. Especially the bit about Stokes' theorem on loop space.

Posted by: Eric on August 6, 2004 8:39 AM | Permalink | Reply to this

Re: Loop Space Differential Geometry

PS: Fortunately, the way I developed the paper, changing it from loop space to parameterized loop space (and hence to agree with your papers) will require only very minor modifications.

As promised, it didn’t take long to make the changes to parameterized loop space. Now I have two versions of the paper. I’m still a little befuddled why I couldn’t get my unparameterized loop space Stokes’ theorem to work out. I don’t give up that easily, so I may still try to find a way. At least it is good to have something that works as a backup :)

If we do end up going with parameterized loop space, which seems likely, then my arguments about the measure being parameter dependent will be moot because EVERYTHING in sight will be parameter dependent. This is probably what you have been saying all along :)

Sorry I’m a “little” slow :)

The good news is that if we write up some notes on loop space differential geometry that even I can understand, then it means anybody could understand it :) This was the goal from the beginning.

As a parting thought, I just wrote a very cute section

1.4.3 Fundamental Theorem of Loop Calculus

(1)
\begin{aligned}
\int_\Gamma d\phi & = \int_0^1 \tr \left\langle \left. d\phi \right|_{L\circ\Gamma(t)}, \left. \frac{d}{dt} \right|_{L\circ\Gamma(t)} \right\rangle dt \\
 & = \int_0^1 \left[ \frac{1}{2\pi} \int_0^{2\pi} \left\langle \left. d\phi \right|_{L(\sigma)\circ\Gamma(t)}, \left. \frac{d}{dt} \right|_{L(\sigma)\circ\Gamma(t)} \right\rangle d\sigma \right] dt \\
 & = \frac{1}{2\pi} \int_0^{2\pi} \left[ \int_0^1 \left. \frac{d\phi}{dt} \right|_{L(\sigma)\circ\Gamma(t)} dt \right] d\sigma \\
 & = \frac{1}{2\pi} \int_0^{2\pi} \left[ \left. \phi \right|_{L(\sigma)\circ\Gamma(1)} - \left. \phi \right|_{L(\sigma)\circ\Gamma(0)} \right] d\sigma \\
 & = \tr\left( \left. \phi \right|_{L\circ\Gamma(1)} \right) - \tr\left( \left. \phi \right|_{L\circ\Gamma(0)} \right)
\end{aligned}

so that we have the fundamental theorem of loop calculus

(2) \int_\Gamma d\phi = \int_{\partial\Gamma} \phi.

Phew! It took me longer to get that equation aligned correctly in WebTeX than it took me to make all the changes in the paper :)

Of course you will need to see the paper to understand all the notation, but you can get an idea for how the mechanics works from the equation.
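
In the meantime, here is a quick numerical sanity check of the mechanics (my own discretization, with made-up names): take the 0-form on loop space obtained by averaging a target-space function around the loop with the flat 1/2\pi measure, move along a curve \Gamma of loops, and the boundary difference comes out as in (2).

```python
import numpy as np

# A quick numerical sanity check (my own discretization, made-up names):
# for the 0-form on loop space obtained by averaging phi around the loop
# with the flat 1/2pi measure, integrating its differential along a curve
# Gamma in loop space reproduces the boundary difference of Eq. (2).

phi = lambda p: p[0]**2 - p[1]                  # a function on target space
sig = np.linspace(0, 2*np.pi, 512, endpoint=False)

def Gamma(t):
    """A curve in loop space: circles with t-dependent radius and center."""
    r, c = 1.0 + t, 0.5*t
    return np.stack([c + r*np.cos(sig), r*np.sin(sig)], axis=-1)

def Phi(t):
    """tr(phi) on the loop Gamma(t): the flat average (1/2pi) \oint phi."""
    return np.mean([phi(p) for p in Gamma(t)])

ts = np.linspace(0.0, 1.0, 2001)
vals = np.array([Phi(t) for t in ts])
d = np.gradient(vals, ts)                       # d(tr phi)/dt along Gamma
lhs = np.sum(0.5*(d[1:] + d[:-1]) * np.diff(ts))    # trapezoid rule
print(lhs, Phi(1.0) - Phi(0.0))                 # both ~1.75
```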

Eric

Posted by: Eric on August 6, 2004 4:02 PM | Permalink | Reply to this

Re: Loop Space Differential Geometry

Hi Urs,

I imagine you will be returning soon. I hope you had a nice escape while it lasted :) The notes are coming along nicely. Of course the progress is slow, but there is progress.

I’m still a little befuddled why I couldn’t get my unparameterized loop space Stokes’ theorem to work out.

I have found a new map to be pretty handy; it helps explain this a little bit. For some \sigma \in \IR/{2\pi}, let

(1) L_\sigma : \mathcal{LM} \to \mathcal{M}

be defined by

(2) L_\sigma(\gamma) = \gamma(\sigma).

This map is only defined for parameterized loop space. For a curve

(3) \Gamma : [0,1] \to \mathcal{LM}

on parameterized loop space, we have a family of curves

(4) L_\sigma \circ \Gamma : [0,1] \to \mathcal{M}

on target space. The existence of this family of curves is special to parameterized loop space and is what allows us to push forward a tangent vector on loop space to a continuum direct sum of tangent vectors defined along a loop on target space.

For the space of maps

(5) \gamma : S^1 \to \mathcal{M}

there is no such natural way to define the push forward of tangent vectors on loop space to target space.

This is one compelling reason, among others, why we need to work with the space of parameterized loops. Kind of neat :)
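
A tiny illustration of these evaluation maps (discretization mine): a curve \Gamma of parameterized loops gives one target-space curve t \mapsto L_\sigma(\Gamma(t)) for each \sigma, and differentiating those curves in t is exactly the push forward of d/dt.

```python
import numpy as np

# A tiny illustration (my own discretization): a curve Gamma in
# parameterized loop space gives, for each sigma, the target-space curve
# t -> L_sigma(Gamma(t)), and differentiating those curves in t is the
# push forward of the loop-space tangent vector d/dt.

sig = np.linspace(0, 2*np.pi, 256, endpoint=False)

def Gamma(t):
    """A curve of parameterized loops: an ellipse flattening with t."""
    return np.stack([np.cos(sig), (1.0 - 0.5*t)*np.sin(sig)], axis=-1)

eps = 1e-6
pushforward = (Gamma(eps) - Gamma(-eps)) / (2*eps)  # one vector per sigma

print(pushforward[:3])      # ~ (0, -0.5 sin sigma) at the first sigmas
```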

Ok. I am now officially convinced that we should be dealing with parameterized loop space :) Having the ability to push forward tangent vectors is crucial in order to be able to define integration. Integration is crucial to physics because every measurement involves integration in some way or other. Therefore, if loop space is to have anything to do with physics, it should be parameterized loop space. Ahhh… enlightenment :)

Eric

Posted by: Eric on August 12, 2004 3:14 AM | Permalink | Reply to this

Re: Loop Space Differential Geometry

Hi Eric -

I am back from vacation and ready to do some work again.

I have had a look at your comments. As you note, apparently I didn't fully understand in which sense you were talking about unparameterized loop space. Indeed, to me unparameterized loop space was the space of equivalence classes of maps from (0,2\pi] into target space, where maps are taken to be equivalent if they have the same image in target space.

I believe that this is a very awkward space to deal with directly.

I see that you have convinced yourself of some of the merits of parameterized loop space. But I am surprised about what you call Stokes’ theorem on loop space. Seems to me that what you wrote down as section 1.4.3 is some fancy version of Stokes’ theorem (the ‘fundamental theorem’) on a single loop not on loop space.

Stokes’ theorem on parameterized loop space must involve some integration over loops, not over points on a single loop. And it can, formally, be written down easily, since when treating \sigma as a continuous index everything looks exactly as in finitely many dimensions. The only subtlety is to take care that the formal expressions used this way are well defined. But that part can be dealt with by writing functions on loop space as formal power series, the way it was done by Chen, as described in hep-th/0401215, which we talked about before. If you/we are serious about writing some mathy stuff about loop space differential geometry, it would probably be indispensable to first carefully read Chen's book on this topic. I suspect that most of what you are currently thinking about has been done there already.

I’ll see if I can get ahold of a used copy of this book (since it is apparently out of print). Maybe the papers collected in that book are even available online somewhere?

Posted by: Urs Schreiber on August 12, 2004 4:13 PM | Permalink | PGP Sig | Reply to this

Re: Loop Space Differential Geometry

I am back from vacation and ready to do some work again.

Hey hey! :) Good to see you back :)

Seems to me that what you wrote down as section 1.4.3 is some fancy version of Stokes’ theorem (the ‘fundamental theorem’) on a single loop not on loop space.

No worries, there are many details missing in that blurb. Later tonight, I will email a copy of my notes. With some more work and your blessing, maybe we can make them available on the arXiv.

For the time being, let me just reassure you that in Section 1.4.3 above, d\phi is a 1-form on loop space and

(1) \Gamma : [0,1] \to \mathcal{LM}

is a curve on loop space. You'll see right away how it works when you have the notes in your hands. I tried to be thorough and to make it self-contained, so it should hopefully be readable (or I failed miserably :)).

It will be necessary to be aware of Chen's work for references, but I am having too much fun discovering this stuff on my own to ruin the fun and read his papers. If everything I do turns out to be unoriginal, that is fine. We can just promote it as a "review" article. Hopefully, I can add some new insights that might at least help others learn the material.

Best wishes,
Eric

Posted by: Eric on August 12, 2004 7:31 PM | Permalink | Reply to this

Sigma

Hi Urs,

I actually spent some time this weekend trying to make the changes to the notes that you suggested, but I don't feel like I accomplished much. A big reason is that, since my accident 3 weeks ago, I haven't been able to get a good night's sleep. This cast has my wrist bent down at just the right angle to make it impossible to find a comfortable position to sleep in. The best I can do is catch several "naps" throughout the night. I was also napping throughout the day Saturday and today, which is not very conducive to work.

With my excuses out of the way, back to the fun stuff :)

The thing I spent the most time meditating on this weekend was the issue of "mixed forms", i.e. forms obtained by wedging coordinate basis 1-forms at different values of \sigma, e.g.

(1) dx^{(\mu,\sigma)} \wedge dx^{(\nu,\sigma')},

where \sigma \ne \sigma'. Of course I can appreciate the desire to simply postulate that \sigma take on the role of a continuum index, in which case the above would be a natural thing to do, but I haven't been able to convince myself yet that doing so is completely justified, or perhaps "motivated" is a better word.

I could be wrong, but skimming your deformation paper, I didn't find anything that explicitly makes use of distinct \sigma's. In fact, unless I missed something, where distinct \sigma and \sigma' appear, they are accompanied by a \delta(\sigma,\sigma').

If that is true, then it would seem that we could, in principle, rewrite that paper using exclusively the concept that I refer to in the notes as "loop tensor fields", i.e. tensors defined along a loop in target space. For example, a loop vector field \left. X \right|_{L(\gamma)} along the loop L(\gamma) in target space is defined by

(2) \left. X \right|_{L(\gamma)} = \underset{\sigma \in \IR/{2\pi}}{\bigoplus} \left. X \right|_{\gamma(\sigma)},

where

(3) \gamma : \IR/{2\pi} \to \mathcal{M}

is a parameterized loop, i.e. a point in loop space, and

(4) L : \mathcal{LM} \to \mathcal{M}

is the loop map which takes a point in loop space to its image in target space.

To put it another way, which is probably easier to refute: it seems like nothing would change if we set

(5) \mathcal{E}^{\dagger(\mu,\sigma)} \mathcal{E}^{\dagger(\nu,\sigma')} = \delta(\sigma,\sigma')\, \mathcal{E}^{\dagger(\mu,\sigma)} \mathcal{E}^{\dagger(\nu,\sigma)}

and

(6) \mathcal{E}_{(\mu,\sigma)} \mathcal{E}_{(\nu,\sigma')} = \delta(\sigma,\sigma')\, \mathcal{E}_{(\mu,\sigma)} \mathcal{E}_{(\nu,\sigma)}.

Then we'd still have

(7) \{\mathcal{E}^{\dagger(\mu,\sigma)}, \mathcal{E}^{\dagger(\nu,\sigma')}\} = 0,
(8) \{\mathcal{E}_{(\mu,\sigma)}, \mathcal{E}_{(\nu,\sigma')}\} = 0,
(9) \{\mathcal{E}_{(\mu,\sigma)}, \mathcal{E}^{\dagger(\nu,\sigma')}\} = \delta^{(\nu,\sigma')}_{(\mu,\sigma)}

as before.

If you could demonstrate that this would lead to results that are incompatible with the paper, that would help me understand what I am missing because right now I am thinking that it is, in fact, compatible.

Eric

Posted by: Eric on August 16, 2004 2:49 AM | Permalink | Reply to this

Re: Sigma

Hi Eric -

I feel for you concerning your wrist. A couple of years ago I broke my right foot, and I know how such a seemingly peripheral injury can bedevil body and soul.

I volunteered to baby-sit my mother's dog today, which makes me spend a rather unproductive but very relaxing day, mostly filled with riding my bike through the park like a madman - with the dog still being faster than me.

But my dad has an age-old PC, and I'll see if I can successfully use it to post to the SCT.

You are right that many nice and useful objects on loop space involve single integrals over loops only. But that's just because that's the simplest case. When you look at my 2-form paper, or the one in preparation on super-Pohlmeyer stuff, you'll see that, for instance, when nonabelian things come into play, multi-integrals (or 'iterated integrals' in Chen's language) are unavoidable, and these in general involve the 'mixed forms' that you are talking about.

And that's no wonder. After all, as I have mentioned in private mail recently, when you take two generic 1-forms on loop space and wedge them together, the result contains 'mixed' forms. There is nothing scary about these.

Also notice that I am not trying to 'postulate' that sigma plays the role of an index. This is a result, not an axiom. We talked about how a vector looks on loop space, and by looking at the resulting expressions you see that the integral over sigma plays precisely the same role as the sum over the indices. The polygon-space approximation maybe makes it easier to see why this must be true. Here the space one is working on is essentially the direct sum of several copies of target space, and this direct sum obviously induces indices that involve several copies of the original indices.

So let’s try to approach this systematically. I am claiming that once we know what a vector on loop space is everything about rank (m,n) tensors follows. Do you agree with what I wrote about loop space vectors last time? Can you see how the tensor product of two such vectors necessarily (in general) involves ‘mixed’ elements? Let me know which of these steps you do not agree with.

Posted by: Urs on August 16, 2004 3:17 PM | Permalink | Reply to this

Re: Sigma

Hi Urs :)

Thanks for the image of you being pulled along behind your mom’s dog. Injuries aside, work has been extremely unpleasant lately so anything to make me chuckle is more than welcome :)

So let’s try to approach this systematically. I am claiming that once we know what a vector on loop space is everything about rank (m,n) tensors follows. Do you agree with what I wrote about loop space vectors last time? Can you see how the tensor product of two such vectors necessarily (in general) involves ‘mixed’ elements? Let me know which of these steps you do not agree with.

Ok, but first of all, let me clarify that my conviction one way or another is not strong enough for me to “not agree” with any particular step. Not yet anyway. At the moment, I think a better way to describe the situation is that I don’t yet understand things.

What I think I understand is that a tangent vector \left. X\right|_\gamma on loop space pushes forward to a loop vector field \left. X\right|_{L(\gamma)} on target space, as described in my notes. I am pretty sure this is correct, especially since you liked that figure so much :)

I think I also understand that if \gamma\in\mathcal{LM} is a loop for which L(\gamma) is contained within a coordinate chart on target space, we can express a covector as

(1) \left.\alpha\right|_\gamma = \int_0^{2\pi} \left[\left.\alpha_{(\mu,\sigma)} dx^{(\mu,\sigma)}\right|_\gamma \right] d\sigma,

where, for the time being, I want to explicitly write down the integral over \sigma. From this, it follows that we should have

(2) \left.\alpha\right|_\gamma\otimes \left.\beta\right|_\gamma = \int_0^{2\pi} \int_0^{2\pi} \left[ \alpha_{(\mu,\sigma)}\beta_{(\nu,\sigma')} \left. dx^{(\mu,\sigma)}\right|_\gamma \otimes \left. dx^{(\nu,\sigma')}\right|_\gamma \right] d\sigma d\sigma'.

What I don’t understand is why we do not set

(3) \left. dx^{(\mu,\sigma)}\right|_\gamma \otimes \left. dx^{(\nu,\sigma')}\right|_\gamma = \delta(\sigma,\sigma') \left. dx^{(\mu,\sigma)}\right|_\gamma \otimes \left. dx^{(\nu,\sigma)}\right|_\gamma

so that

(4) \left.\alpha\right|_\gamma\otimes \left.\beta\right|_\gamma = \int_0^{2\pi} \left[ \alpha_{(\mu,\sigma)}\beta_{(\nu,\sigma)} \left. dx^{(\mu,\sigma)}\right|_\gamma \otimes \left. dx^{(\nu,\sigma)}\right|_\gamma \right] d\sigma.

If you do not do this, then the tensor product on loop space corresponds to a non-local operation on target space, where you combine elements in different cotangent spaces.
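One can see the mixed components appear in a small discretized sketch (d, N and the random coefficients are arbitrary illustration): the outer product of two covectors fills all (\sigma,\sigma') blocks, and keeping only the \sigma = \sigma' diagonal, as in (3), discards most of the tensor.

```python
import numpy as np

# Discretize sigma into N points; flatten (mu, sigma) into one index,
# mu-major, so entry index = mu * N + sigma.
d, N = 2, 8
rng = np.random.default_rng(1)
a = rng.normal(size=(d, N)).reshape(-1)  # covector alpha_{(mu,sigma)}
b = rng.normal(size=(d, N)).reshape(-1)  # covector beta_{(nu,sigma')}

T = np.outer(a, b)  # rank-2 tensor: all (sigma, sigma') pairs, mixed included

# The proposed diagonal restriction keeps only the sigma == sigma' entries:
mask = np.kron(np.ones((d, d)), np.eye(N))
print("full tensor norm:  ", np.linalg.norm(T))
print("diagonal-only norm:", np.linalg.norm(T * mask))
```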

If we forget about loop space for a second, there is a place where this kind of nonlocal tensor product appears in standard differential geometry. That is in the Green’s function

(5) G = G(x,x') dx\otimes dx'.

In the case of EM, we can write things like

(6) A = \int_{\mathcal{M}} G\wedge\star j,

where j is a current 1-form and A is the 1-form produced by the source j. In EM, I would call G a “multi-form” because it is a form, not on \mathcal{M}, but on \mathcal{M}\times\mathcal{M}, i.e.

(7) G\in\Omega^1(\mathcal{M})\otimes \Omega^1(\mathcal{M}).

Of course, I would never suggest that the Green’s function is unimportant; it obviously is extremely important. However, I don’t think anyone would object if I said that G is not a form on \mathcal{M}.

Similarly, I am not suggesting that mixed forms, or better “multi-forms”, on loop space are unimportant. I am just wondering if such a thing would be more naturally associated with tensor products of spaces of forms on loop space. For example, if you have an application (non-abelian connections being a prime example) where you need things like

(8) dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma')}

for distinct values of \sigma and \sigma', then is it possible that it would be more appropriate to think of this as a form

(9) dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma')}\in \Omega^1(\mathcal{LM})\otimes \Omega^1(\mathcal{LM})

rather than

(10) dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma')}\in \Omega^1(\mathcal{LM})?

If my hunch is correct that your deformation paper could be completely rewritten using loop tensor fields on target space, that would add substance to my suspicions.

Again, I’m not convinced either way, I’m just still trying to understand things and this is where I am at present.

Eric

Posted by: Eric on August 16, 2004 5:18 PM | Permalink | Reply to this

Re: Sigma

Thanks for the image of you being pulled along behind your mom’s dog. Injuries aside, work has been extremely unpleasant lately so anything to make me chuckle is more than welcome :)

The full truth is even more amusing than that:

The dog is not on the leash, usually, because she (a female) takes orders and the park/forest is rather large. So she runs around and invents games to play with us. One of these games is called: ‘Bet I am faster through the undergrowth than you are by bike on the path.’ She takes the rigid self-invented rules of that game quite seriously, to the extent that people stop to watch us in amazement.

So we line up as in an Olympic contest, she in the shrubs, me on the path. She trembles in excitement, her eyes staring at me, but she won’t move until I give the correct sign, like shouting ‘los’ (‘go’) or clapping my hands. When I do, she darts off fiercely. (Sometimes I apparently violate the rules of the game by making some mistake with the start sign. Then she complains about my dumbness, instead of starting to run.)

I’ll try to catch up on my bike. But she reaches a certain point (always the same point, which she somehow chooses in the beginning) marking the end of the race, usually before I do (not quite always, though; I am a more serious competitor than my mom…). Then she meets me halfway, basking in her victory - and ready for the next run.

Ok, back to loops. I am glad that you wrote:

What I don’t understand is why we do not set

dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma^\prime)} = \delta(\sigma-\sigma^\prime) dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma^\prime)}

because that allows us to quite precisely address the point under discussion.

My answer is: We don’t do that because we are not free to define what ‘tensor product’ is supposed to mean. If you claim to provide a rank 2 tensor on loop space you are not free to redefine the laws of tensor multiplication.

(Besides, I think the definition you gave is self-inconsistent: since the rule can be applied again to its own right-hand side, it would imply dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma^\prime)} = \delta^n(\sigma-\sigma^\prime) dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma^\prime)} for an arbitrary power n, which is not well defined.)

If you do not do this, then the tensor product on loop space corresponds to a non-local operation on target space, where you combine elements in different cotangent spaces.

Yes! That is the case, and it is very important. For instance, in my 2-form paper I noted that gauge transformations on loop space avoid certain non-localities of this sort in the resulting field strength only if a certain consistency condition is met.

Still, these non-localities are there in general and they need to be there.

But the best thing is that you secretly came to the same conclusion, apparently without noticing ;-) namely you wrote

is it possible that it would be more appropriate to think of this as a form

dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma')} \in \Omega^1(\mathcal{LM})\otimes \Omega^1(\mathcal{LM})

That’s not only more appropriate, that’s what I have been saying all along! :-) Think about this same equation not for forms on loop space but for ordinary forms. Then you will immediately see that this is the very definition of the tensor product. The space of rank 2 tensors is the tensor product of the space of rank 1 tensors with itself. That’s precisely how things are defined. And this is what I was getting at when telling you to consider the tensor product of any 2 vectors or 1-forms on loop space. The result certainly sits in the above tensor product space and it contains mixed forms!

You continued to write:

rather than

dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma')} \in \Omega^1(\mathcal{LM})

Maybe you have a typo here? As stated, the left hand side is a 2-form, the right hand side the space of 1-forms. But even if you put \Omega^2 on the right, we have \Omega^2 \subset \Omega^1 \otimes \Omega^1 and the same as above applies.

Hopefully this convinces you. If not, let me know about further doubts. :-)

Posted by: Urs Schreiber on August 16, 2004 6:42 PM | Permalink | PGP Sig | Reply to this

Re: Sigma

Wait! Wait! I feel like I walked into a trap :)

I made a typo and wrote something that maybe looks correct on accident :)

If you take a look at this paper, you will see that the Green’s function is a double form. The reference to double forms points back to de Rham, “Differentiable Manifolds.” Too bad I don’t have this book handy.

I guess a more appropriate way to write the Green’s function is

(1) G\in\Omega^{(1,1)}(\mathcal{M}\times\mathcal{M})

rather than

(2) G\in\Omega^1(\mathcal{M})\otimes\Omega^1(\mathcal{M})

as I mistakenly said in that last post. I hope you agree that

(3) G\notin\Omega^1(\mathcal{M})\otimes\Omega^1(\mathcal{M}).

Before I can make my point about multi-forms on loop space, we had better try to settle on what multi-forms on target space are.

This is the way I understand it so far, which might not be 100% correct. Say we have manifolds \mathcal{M} and \mathcal{N} with their respective spaces of forms

(4) \Omega(\mathcal{M}) = \bigoplus_{p=0}^{n} \Omega^p(\mathcal{M})

and

(5) \Omega(\mathcal{N}) = \bigoplus_{p=0}^{n} \Omega^p(\mathcal{N}).

Given a p-form \alpha\in\Omega^p(\mathcal{M}) and a q-form \beta\in\Omega^q(\mathcal{N}), we can construct a double (p,q)-form

(6) \alpha\wedge\beta\in\Omega^{(p,q)}(\mathcal{M}\times\mathcal{N}).

Keep in mind that this might not be what one should technically call a double form, but it is what I imagine it should be, not having a proper reference in front of me.

Probably a better thing to do would be to define injection maps

(7) \iota_1:\Omega^p(\mathcal{M})\to\Omega^{(p,0)}(\mathcal{M}\times\mathcal{N})

and

(8) \iota_2:\Omega^q(\mathcal{N})\to\Omega^{(0,q)}(\mathcal{M}\times\mathcal{N})

via

(9) \iota_1(\alpha) = \alpha\otimes 1

and

(10) \iota_2(\beta) = 1\otimes\beta

so that

(11) \iota_1(\alpha)\wedge\iota_2(\beta) = \alpha\otimes\beta

and we define

(12) (\alpha_1\otimes\beta_1)\wedge(\alpha_2\otimes\beta_2) = (\alpha_1\wedge\alpha_2)\otimes(\beta_1\wedge\beta_2).

Perhaps \otimes is not the best symbol to use for bookkeeping purposes, but I hope you see what I am getting at. Come to think of it, perhaps \otimes is what we want because we would like to be able to move 0-forms across it (I think).
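A toy implementation of the bookkeeping rule (12) might look as follows, assuming a double form is represented as a list of (coefficient, A, B) terms, with A and B tuples of basis-1-form indices on \mathcal{M} and \mathcal{N}; signs between the two factors are ignored, treating \otimes as pure bookkeeping, as above. All of this representation is an illustrative assumption, not a standard construction.

```python
import itertools

def wedge_factor(p, q):
    """Wedge two index tuples; None if a basis 1-form repeats (wedge vanishes)."""
    idx = p + q
    if len(set(idx)) < len(idx):
        return None
    # sign of the permutation that sorts the combined indices
    inv = sum(1 for i, j in itertools.combinations(range(len(idx)), 2)
              if idx[i] > idx[j])
    return ((-1) ** inv, tuple(sorted(idx)))

def wedge_double(u, v):
    """Rule (12): (a1 (x) b1) ^ (a2 (x) b2) = (a1 ^ a2) (x) (b1 ^ b2)."""
    out = []
    for c1, A1, B1 in u:
        for c2, A2, B2 in v:
            wa, wb = wedge_factor(A1, A2), wedge_factor(B1, B2)
            if wa and wb:
                out.append((c1 * c2 * wa[0] * wb[0], wa[1], wb[1]))
    return out

u = [(1.0, (1,), (1,))]   # dx^1 (x) dy^1
v = [(1.0, (2,), (2,))]   # dx^2 (x) dy^2
print(wedge_double(u, v)) # [(1.0, (1, 2), (1, 2))]: (dx^1^dx^2) (x) (dy^1^dy^2)
```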

The point is that given a p-form \alpha\in\Omega^p(\mathcal{M}) and a (p,p)-form G\in\Omega^{(p,p)}(\mathcal{M}\times\mathcal{N}), we obtain a p-form \alpha'\in\Omega^p(\mathcal{N}) via

(13) \alpha' = \int_{\mathcal{M}} G\wedge\star\alpha.

This last expression is the important one and the one we want. Everything preceding it is me trying to fill in the blanks. No guarantees on how well I did :) Oh yeah, the case we are interested in, of course, is \mathcal{M}=\mathcal{N}.
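Discretized, (13) is just a kernel acting on coefficients. A sketch in one dimension, where the double form G becomes an n×n matrix (the Gaussian kernel below is only a stand-in for an actual Green’s function, not derived from any particular equation):

```python
import numpy as np

# Discretize M = N = [0, 1] into n points; a (p,p)-double form becomes an
# n x n kernel matrix G[x, x'], and eq. (13) becomes a matrix-vector product.
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# A smoothing kernel standing in for the Green's function (an assumption,
# chosen only to have something concrete).
G = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))

alpha = ((x > 0.4) & (x < 0.6)).astype(float)  # a "source" coefficient function
alpha_prime = G @ alpha * dx                   # alpha' = integral over M of G * alpha
print(alpha_prime.shape)                       # (200,): a form on the second factor
```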

The whole point is that I am trying to figure out if

(14) dx^{(\mu,\sigma)}\otimes dx^{(\nu,\sigma')}

for \sigma\ne\sigma', should be thought of as an element of

(15) \Omega^1(\mathcal{LM})\otimes\Omega^1(\mathcal{LM})

or

(16) \Omega^{(1,1)}(\mathcal{LM}\times\mathcal{LM}).

I’m pretty sure you will tell me it is the former, but I don’t see why it isn’t the latter.

Eric

Posted by: Eric on August 16, 2004 8:59 PM | Permalink | Reply to this

Re: Sigma

Hi Eric -

I am somewhat reluctant to discuss ‘multi-forms’ until we manage to agree on the simple issue of ordinary forms. I think the elementary question at hand does not require any such notions.

Let me again emphasize the simple point under discussion:

You agreed that dx^{(\mu,\sigma)} is a 1-form on loop space. The fact that the symbol \sigma appears here has nothing to do with the ‘multi-forms’ and Green’s functions that you mentioned. It is just a label identifying this particular form. Don’t think in terms of forms on target space. We are talking about forms on loop space. Maybe it helps if we just write \alpha for it. Then pick any other 1-form \beta on loop space.

Now you can wedge these together to obtain \alpha \wedge \beta (which, by the way, is = \alpha \otimes \beta - \beta \otimes \alpha, at least in the ordinary sense in which I am using the tensor product here).

The simple fact is that \alpha\wedge \beta = 0 if and only if \alpha \propto \beta. But you want to introduce a rule which makes \alpha \wedge \beta vanish even when they are not linearly dependent. Any such rule breaks the ordinary notion of forms and wedge products.
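This is easy to check numerically in finite dimensions (the dimension and the random coefficient vectors below are arbitrary): the antisymmetrized outer product vanishes exactly for proportional 1-forms.

```python
import numpy as np

def wedge(a, b):
    """Coefficient matrix of a ^ b = a (x) b - b (x) a for two 1-forms."""
    return np.outer(a, b) - np.outer(b, a)

rng = np.random.default_rng(2)
a = rng.normal(size=5)
b = rng.normal(size=5)

print(np.allclose(wedge(a, b), 0))        # False: linearly independent forms
print(np.allclose(wedge(a, 3.0 * a), 0))  # True: proportional forms
```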

Of course one may make up new definitions as one desires, but please let us first agree on the ordinary notions.

Posted by: Urs Schreiber on August 17, 2004 11:09 AM | Permalink | PGP Sig | Reply to this

Re: Sigma

Hi Urs,

Sorry if I seem to be going off on tangents (no pun intended :)). The thing is that in my attempt to understand loop space differential geometry, I started writing up those notes. I actually understand my notes :) This means that I am only starting to understand loop space differential geometry.

The simple fact is that \alpha\wedge\beta=0 if and only if \alpha\propto\beta. But you want to introduce a rule which makes \alpha\wedge\beta vanish even when they are not linearly dependent. Any such rule breaks the ordinary notion of forms and wedge products.

There is another case where \alpha\wedge\beta = 0 even when \alpha is not proportional to \beta. That is when the support of \alpha and the support of \beta do not overlap.

In the case we are talking about,

(1) \left. dx^{(\mu,\sigma)}\right|_\gamma \quad\text{and}\quad \left. dx^{(\nu,\sigma')}\right|_\gamma

for \sigma\ne\sigma' correspond to cotangent vectors in target space in completely different cotangent spaces. Hence, even though it may turn out to be wrong, I don’t think it is so outlandish to ask the question.

The following is a question I’ve been saving for a rainy day. Even though it is sunny outside it may be relevant here. First, “What is a 0-form on loop space?”

After writing my notes, it became clear to me that a 0-form on loop space corresponds to a loop 0-form on target space, i.e. a 0-form defined along the image of loops in target space. Evaluating the 0-form on loop space at a point \gamma corresponds to integrating the loop 0-form along the loop in target space via the parameter measure d\sigma, a process I called \mathrm{tr}, i.e.

(2) \left.\phi\right|_\gamma = \mathrm{tr}\left(\left.\phi\right|_{L(\gamma)}\right) = \int_0^{2\pi} \left.\phi\right|_{\gamma(\sigma)} d\sigma.
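Concretely, and assuming a discretization into N sample points (the loop and the function \phi below are arbitrary choices), evaluating such a 0-form is one integral over \sigma:

```python
import numpy as np

# Evaluate the "tr" of a target-space 0-form phi along a discretized loop.
N = 400
sigma = np.linspace(0, 2 * np.pi, N, endpoint=False)
dsigma = 2 * np.pi / N

gamma = np.stack([np.cos(sigma), np.sin(sigma)], axis=1)  # unit circle in R^2

def phi(p):                 # a 0-form (function) on target space
    return p[..., 0] ** 2 + 0.5 * p[..., 1]

value = np.sum(phi(gamma)) * dsigma  # eq. (2): one integral over sigma
print(value)                         # ~ pi: the cos^2 part integrates to pi
```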

Ah ha! :) I shouldn’t have saved this question because I think it will help me understand things :)

The main question is, “How do we wedge two 0-forms together?” Taking a cue from standard geometry, we could write down

(3) \left. (\phi\wedge\psi)\right|_\gamma = \left.\phi\right|_\gamma \left.\psi\right|_\gamma,

where the multiplication on the right is done in \mathbb{R}. If this is correct, then we have

(4) \left. (\phi\wedge\psi)\right|_\gamma = \int_0^{2\pi} \int_0^{2\pi} \left[ \left.\phi\right|_{\gamma(\sigma)} \left.\psi\right|_{\gamma(\sigma')} \right] d\sigma d\sigma'

and we have a similar “problem.” I would have been inclined to write

(5) \left. (\phi\wedge\psi)\right|_\gamma = \int_0^{2\pi} \left[ \left.\phi\right|_{\gamma(\sigma)} \left.\psi\right|_{\gamma(\sigma)} \right] d\sigma,

i.e. you take values at corresponding \sigma’s in target space.

Of course if

(6) \left.\phi\right|_{\gamma(\sigma)} \left.\psi\right|_{\gamma(\sigma')} = \delta(\sigma,\sigma') \left.\phi\right|_{\gamma(\sigma)} \left.\psi\right|_{\gamma(\sigma)}

then the two (would seem to) agree and we are back to square one :) At least getting a handle on this would seem to be slightly easier :)

It now seems to me to be a matter of order of operations. Do we multiply first and then integrate, or integrate first and then multiply? The operations clearly do not commute. I think this gets to the heart of my concerns. What do you think? In standard geometry, from which the wedge product above took its cue, the problem does not appear because multiplying and integrating DO commute on “point space”, whereas they do not commute on loop space.

Hmm, interesting :) It is a subtle, but I think important, point.

What do you think about the commutation of multiplying and integrating? Depending on the choice we make, we will get two different versions of loop space differential geometry. In my notes, I chose to multiply first and then integrate. In your papers, you apparently integrate first and then multiply.

I hope you agree that the fact that we have an “order of operations” issue giving rise to two distinct flavors of loop space differential geometry is at least slightly interesting :)

Gotta run!
Eric

Posted by: Eric on August 17, 2004 2:24 PM | Permalink | Reply to this

Re: Sigma

Hi Eric -

you wrote:

That is when the support of α\alpha and the support of β\beta do not overlap.

No, we don’t need to take that complication into account. Everything I am saying applies to one single cotangent space at a given point in loop space.

And yes, we could have the same discussion for 0-forms.

You ask what a 0-form on loop space is. It is just a function on loop space. The most important example of such a function which is not local in your sense is the Wilson line of some connection around the loop. Let A be any connection on target space and let F be the function on loop space which assigns

F(\gamma) = \mathrm{Tr}\,\mathrm{P}\exp\left(\int_0^{2\pi}d\sigma\, A(\gamma(\sigma))\cdot \gamma^\prime(\sigma)\right).

This is a 0-form on loop space and it involves products of 0-forms on target space at different points. This is not a bug, but a feature.
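For concreteness, here is a sketch of this Wilson loop for a discretized circle with a constant su(2)-valued connection (the matrices A_\mu and the step count are illustrative assumptions, and path ordering is approximated by an ordered product of short-segment exponentials):

```python
import numpy as np
from scipy.linalg import expm

# Discretized F(gamma) = Tr P exp( int A(gamma(s)) . gamma'(s) ds ).
N = 500
s = np.linspace(0, 2 * np.pi, N, endpoint=False)
ds = 2 * np.pi / N
gamma = np.stack([np.cos(s), np.sin(s)], axis=1)
dgamma = np.stack([-np.sin(s), np.cos(s)], axis=1)  # gamma'(sigma)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
A = [0.7j * sx, 0.3j * sy]   # A_mu: one anti-Hermitian matrix per direction

W = np.eye(2, dtype=complex)
for k in range(N):           # path ordering: ordered product along the loop
    step = sum(A[mu] * dgamma[k, mu] for mu in range(2)) * ds
    W = expm(step) @ W
print(np.trace(W))           # the Wilson loop: one number per loop gamma
```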

You ask how we multiply 0-forms. As we always do! By multiplying their values. We don’t change the rules of differential geometry at all. So (\phi \wedge \psi)(\gamma) = \phi(\gamma)\psi(\gamma). This has nothing to do with loop space. This is just the general rule for doing differential geometry.

If you like to think of a generalization of this, which you address as an ‘order of operations issue’, that’s fine with me, but let’s first agree on the ordinary way to do differential geometry.

Please! :-)

Posted by: Urs Schreiber on August 17, 2004 4:46 PM | Permalink | PGP Sig | Reply to this

Re: Sigma

Hi Urs! :)

Thanks for your patience so far. I feel like, as patient as you’ve been, I had better say something intelligent quickly or I will push you over the edge :)

You ask how we multiply 0-forms. As we always do! By multiplying their values. We don’t change the rules of differential geometry at all. So (\phi\wedge\psi)(\gamma)=\phi(\gamma)\psi(\gamma). This has nothing to do with loop space. This is just the general rule for doing differential geometry.

I have a feeling that what I am about to say is not very intelligent, but hopefully you can bear with me for just a little longer. Focusing on 0-forms, I’m sure we can straighten things out soon.

The rule

(1) (\phi\wedge\psi)(p) = \phi(p) \psi(p)

is absolutely unquestionable for 0-forms on “point space”, i.e. on standard manifolds where the points have no internal structure.

However, loop space is more subtle. When you say

let’s first agree on the ordinary way to do differential geometry

you are assuming that there is an ordinary way to do loop space differential geometry. If this were the case, then I don’t think Rajeev would have made the statement

The set of loops on space-time is an infinite dimensional space; calculus on such spaces is in its infancy. It is too early to have rigorous definitions of continuity and differentiability of such functions. Indeed most of the work in that direction is of no value in actually solving problems of interest (rather than in showing that the solution exists.)

A point on loop space, unlike a point in a usual manifold, has some internal structure to it. This internal structure can potentially lead to some unordinary behavior. In particular, it can potentially lead to an “order of operations” ambiguity. This ambiguity does not appear in ordinary differential geometry because there the points have no internal structure.

To help explain what I mean, consider two ordinary 0-forms on some ordinary manifold whose points have no internal structure. In this case, we can multiply first and then integrate giving

(2) \int_p \phi\wedge\psi = (\phi\wedge\psi)(p).

Or we could integrate first and then multiply giving

(3) \int_p \phi \int_p \psi = \phi(p) \psi(p).

Since

(4) (\phi\wedge\psi)(p) = \phi(p) \psi(p),

it doesn’t matter in which order we perform the operations.

However, on loop space, the points have some internal structure so evaluating a 0-form on loop space involves an extra integral over this internal structure, i.e.

(5) \int_\gamma \phi = \frac{1}{2\pi} \int_0^{2\pi} \left.\phi(\sigma)\right|_\gamma d\sigma,

where the normalization is chosen so that \left. 1\right|_\gamma = 1. Because of this internal structure, it now does matter in which order you perform the operations. For instance, if we multiply first and then integrate, we get

(6) \int_\gamma \phi\wedge\psi = \frac{1}{2\pi} \int_0^{2\pi} \left.\phi(\sigma)\right|_{\gamma} \left.\psi(\sigma)\right|_{\gamma} d\sigma.

On the other hand, if we integrate first and then multiply, we get

(7) \int_\gamma \phi \int_\gamma \psi = \frac{1}{2\pi} \frac{1}{2\pi} \int_0^{2\pi} \int_0^{2\pi} \left.\phi(\sigma)\right|_{\gamma} \left.\psi(\sigma')\right|_{\gamma} d\sigma d\sigma'.

Therefore, due to the internal structure of points on loop space, we have

(8) \int_\gamma \phi\wedge\psi \ne \int_\gamma \phi \int_\gamma \psi.

This might seem unordinary, but maybe loop space differential geometry is supposed to be unordinary :)
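Numerically the inequality (8) is easy to exhibit; a sketch with \phi = \psi = \cos\sigma along a fixed loop (the choice of functions is arbitrary):

```python
import numpy as np

N = 1000
sigma = np.linspace(0, 2 * np.pi, N, endpoint=False)
dsigma = 2 * np.pi / N
norm = 1 / (2 * np.pi)

phi = np.cos(sigma)   # phi(sigma)|_gamma along some fixed loop gamma
psi = np.cos(sigma)   # psi(sigma)|_gamma

multiply_then_integrate = norm * np.sum(phi * psi) * dsigma        # eq. (6)
integrate_then_multiply = (norm * np.sum(phi) * dsigma) * \
                          (norm * np.sum(psi) * dsigma)            # eq. (7)
print(multiply_then_integrate)  # 0.5
print(integrate_then_multiply)  # 0.0: the two orders disagree, eq. (8)
```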

Although it should always go without saying, I am still not convinced one way or the other and am just trying to explore the various ways to define these basic operations. The internal structure of points on loop space seems to add a subtlety that might make these discussions worthwhile.

Eric

Posted by: Eric on August 17, 2004 6:31 PM | Permalink | Reply to this

Re: Sigma

Hi Eric -

don’t worry, I am ready to discuss this indefinitely. I may have found it a little irritating that you seemed to be fighting against the obvious. But let’s sort that out - either way! :-)

Yes, we have to be careful on loop space: due to its infinite dimensionality there can be divergences in naive expressions, and means must be taken to circumvent these. But our discussion is not at that point yet. The ambiguity that you are worried about is not a problem of this sort but - if I may say so - a confusion in your notation! :-)

In my opinion your emphasis on target space geometry while we are really talking about loop space geometry gets in the way of some simple insights.

A point on loop space does not have substructure. No point does.

In what you think of as a counterexample, where you write

(1) \int_\gamma \phi \wedge \psi = \frac{1}{2\pi} \int_0^{2\pi} \phi_\gamma(\sigma)\psi_\gamma(\sigma) d\sigma

you are implicitly using the wedge product on target space on the left. At the point we are discussing, this is extra structure which I would really urge you to forget for a moment. When you think about it, this does not follow the general scheme that you mentioned at the beginning of your comment. There is no reason for this to agree with the expression \int_\gamma \phi \int_\gamma \psi, which is the correct expression.

Moreover, far from every function on loop space is of the form that you are assuming here, i.e. not every F is of the form F(\gamma) = \int_\gamma \phi(\sigma) \, d\sigma. Just take the Wilson loop I mentioned as an example.

Posted by: Urs Schreiber on August 17, 2004 7:13 PM | Permalink | PGP Sig | Reply to this

Re: Sigma

Hi again Eric -

last night I kept thinking about how I could convince you concerning our discussion. It occurred to me that it might help to again look at n-gon space.

In particular, let’s look at “triangle space” \mathrm{Triangle}\,\mathcal{M}, the space of 3-tuples of points in target space \mathcal{M}, which should be thought of as indexed by a discrete \sigma taking values 0, 2\pi/3 and 4\pi/3.

This space already features all the issues that we are talking about, and it is finite-dimensional, so that we can safely ignore the problems with divergences that occur in loop space and hence convince ourselves that these play no role for the identification of the differential geometry over loop space.

Let me for simplicity just assume that target space is \mathbb{R}^2. Then the first important thing is that \mathrm{Triangle}\,\mathcal{M} is just the same as \mathbb{R}^6.

This makes my point that we should not think of points in \mathrm{Triangle}\,\mathcal{M} as having ‘substructure’ in the sense in which you used this notion in your comment.

Namely, differential geometry on \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6 does not depend on our interpretation of a point in \mathbb{R}^6 as describing a triangle in \mathbb{R}^2. Given two functions \phi and \psi on \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6 their product is obviously

(1) (\phi \psi)(x) = \phi(x) \psi(x) \,.

Next consider 1-forms on \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6. If the coordinates on \mathcal{M}=\mathbb{R}^2 are labeled x^1 and x^2, it is convenient to label the coordinates of \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6 as

x^{(1,0)}, x^{(1,\frac{2}{3}\pi)}, x^{(1,\frac{4}{3}\pi)}, x^{(2,0)}, x^{(2,\frac{2}{3}\pi)}, x^{(2,\frac{4}{3}\pi)}

We could just as well label them x^1, x^2, x^3, x^4, x^5, x^6, but the above labeling reminds us of how we want to interpret the points in \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6. But let’s use both notations equivalently

x^1 = x^{(1,0)}, x^2 = x^{(1,\frac{2}{3}\pi)}, x^3 = x^{(1,\frac{4}{3}\pi)}, x^4 = x^{(2,0)}, x^5 = x^{(2,\frac{2}{3}\pi)}, x^6 = x^{(2,\frac{4}{3}\pi)}

In these coordinates, the basis 1-forms are of course

dx^1 = dx^{(1,0)}, dx^2 = dx^{(1,\frac{2}{3}\pi)}, dx^3 = dx^{(1,\frac{4}{3}\pi)}, dx^4 = dx^{(2,0)}, dx^5 = dx^{(2,\frac{2}{3}\pi)}, dx^6 = dx^{(2,\frac{4}{3}\pi)}

where again the equality signs just equate two notations.

And from these we obtain non-vanishing 2-forms like

(2) dx^1 \wedge dx^2 = dx^{(1,0)} \wedge dx^{(1,\frac{2}{3}\pi)} \,.

Note that just because we want to interpret points in \mathbb{R}^6 as describing triangles in \mathbb{R}^2, the differential geometry on \mathbb{R}^6 does not change. Still, in the interpretation of \mathbb{R}^6 as \mathrm{Triangle}\,\mathbb{R}^2 the above ordinary 2-form on \mathbb{R}^6 has a ‘non-local interpretation’ in target space, in a sense. But this is not a problem and in fact inevitable and good. Given any triangle in \mathbb{R}^2 one can certainly consider changes of the x^1 coordinate of its vertex labeled by \sigma=0 together with changes of the x^1 coordinate of its vertex labeled by \sigma=2\pi/3. This gives the above 2-form.

Similarly for 0-forms on \mathrm{Triangle}\,\mathcal{M}. Consider the function F on \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6 which maps any triangle in \mathbb{R}^2 to its circumference, and consider the function G which maps a triangle to the length of its longest edge. Both F and G are ordinary functions on \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6 and their product FG is certainly

(3) (FG)(x) = F(x) G(x) \,

which, when evaluated at any point x of \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6, returns the product of the circumference of the given triangle with the length of its longest edge.

Furthermore, note that the functions F and G do not come from summing 0-forms on target space over the vertices of the triangle.

The functions which you considered as 0-forms on loop space in your latest comment would have triangle space analogs H of the form

(4) H(x) = \sum_{i=1}^6 h_i x^i

with \{h_i\} a set of constants. Clearly, this is only a very small subset of all functions on \mathrm{Triangle}\,\mathcal{M} = \mathbb{R}^6.
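Triangle space is small enough to put directly on a computer. A sketch (coordinate ordering as above; the sample triangle and the constants h_i are arbitrary) of F, G, their pointwise product, and the restricted linear class (4):

```python
import numpy as np

def vertices(x):
    """Split x in R^6 into three vertices; coordinates ordered as
    x^{(1,0)}, x^{(1,2pi/3)}, x^{(1,4pi/3)}, x^{(2,0)}, ... (mu-major)."""
    return x.reshape(2, 3).T

def F(x):  # circumference of the triangle
    v = vertices(x)
    return sum(np.linalg.norm(v[i] - v[(i + 1) % 3]) for i in range(3))

def G(x):  # length of the longest edge
    v = vertices(x)
    return max(np.linalg.norm(v[i] - v[(i + 1) % 3]) for i in range(3))

h = np.arange(1.0, 7.0)   # some constants h_i for the linear class (4)
def H(x):
    return h @ x          # H(x) = sum_i h_i x^i

x = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 1.0])  # vertices (0,0), (1,0), (0,1)
print(F(x) * G(x))  # (FG)(x) = F(x) G(x): the ordinary pointwise product
print(H(x))         # F and G are clearly not of this restricted linear form
```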

I hope that these examples at least clarify my point. I would suggest that we first try to agree on the differential geometry of triangle space before continuing the discussion of loop space.

Posted by: Urs Schreiber on August 18, 2004 10:57 AM | Permalink | PGP Sig | Reply to this

Triangle Space

Hi Urs,

Thanks for putting up with these questions. Believe it or not, none of this stuff is obvious to me and I’m not simply trying to cause trouble :) On the bright side, if I am struggling with this, I can pretty much guarantee you that I’m not alone. Of course there will be the occasional incredibly bright student, like yourself, for whom this stuff is obvious, but I’m willing to bet that for the majority of students it is far from obvious.

Work is distracting me at the moment (or should it be the other way around? :)), but I think your idea of working with triangle space is a good one.

More later…

Gotta run,
Eric

Posted by: Eric on August 18, 2004 6:57 PM | Permalink | Reply to this

Re: Triangle Space

Hi Eric -

I am glad that you replied, I was getting worried that I might have annoyed you.

Yes, let’s consider triangle space and maybe n-gon space for arbitrary n. The major part of the interesting stuff that I am looking forward to discussing can be discussed on n-gon space just as well as on loop space. Once the concept of n-gon space is familiar, loop space is just an afterthought - and we can then concentrate on the subtleties it brings with it.

Maybe one further comment that might help you familiarize yourself with these notions:

Obviously n-gon space is nothing but the configuration space of n distinguishable particles. From that point of view it may not seem all that exotic. A point in n-gon space is a configuration of a physical system consisting of n distinguishable particles. A vector on n-gon space is hence associated with an infinitesimal shift in the configuration of these particles, and so on.

Posted by: Urs Schreiber on August 18, 2004 7:26 PM | Permalink | PGP Sig | Reply to this

Re: Triangle Space

Hi Urs :)

I am glad that you replied, I was getting worried that I might have annoyed you.

Hah! “You” annoy “me”? I’m more worried about the other way around :) Occasionally, one or two people in my life have tried hard enough to actually annoy me, but you are unlikely to ever achieve that feat :)

Yes, let’s consider triangle space and maybe n-gon space for arbitrary n. The major part of the interesting stuff that I am looking forward to discussing can be discussed on n-gon space just as well as on loop space. Once the concept of n-gon space is familiar, loop space is just an afterthought - and we can then concentrate on the subtleties it brings with it.

Sounds great to me :)

The first troubling thing for me now is this issue of reparameterization invariance. For n-gon space on \mathbb{R}^m, there are 2n (isolated) points in \mathbb{R}^{mn} that correspond to the same n-gon in target space. This comes from n cyclic permutations in the one direction and n cyclic permutations in the other. For example,

(1) (x^{(1,0)},x^{(1,\frac{2}{3}\pi)},x^{(1,\frac{4}{3}\pi)},x^{(2,0)},x^{(2,\frac{2}{3}\pi)},x^{(2,\frac{4}{3}\pi)})
(2) (x^{(1,\frac{2}{3}\pi)},x^{(1,\frac{4}{3}\pi)},x^{(1,0)},x^{(2,\frac{2}{3}\pi)},x^{(2,\frac{4}{3}\pi)},x^{(2,0)})

and

(3) (x^{(1,\frac{4}{3}\pi)},x^{(1,0)},x^{(1,\frac{2}{3}\pi)},x^{(2,\frac{4}{3}\pi)},x^{(2,0)},x^{(2,\frac{2}{3}\pi)})

all correspond to the same triangle in \mathbb{R}^2. It’s going to take some work before I’m even comfortable with n-gon space :)
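A quick enumeration of these relabelings, for the triangle of the earlier example (reversing the vertex list gives the other orientation):

```python
import numpy as np

def relabelings(v):
    """All 2n vertex orderings describing the same unoriented n-gon:
    n cyclic shifts for each of the two orientations."""
    n = len(v)
    out = []
    for oriented in (v, v[::-1]):
        for k in range(n):
            out.append(np.roll(oriented, k, axis=0))
    return out

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(len(relabelings(triangle)))  # 6 = 2n points of triangle space
```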

Gotta run!
Eric

Posted by: Eric on August 18, 2004 10:38 PM | Permalink | Reply to this

Re: Triangle Space

Hi Eric -

all correspond to the same triangle in \mathbb{R}^2

True. But this fact should not keep you from getting to the interesting stuff.

If you prefer, call the space the ‘labeled triangle space’ which consists of triangles with distinguishable vertices. The space of unlabeled triangles is a subspace of this space and that’s fine.

Posted by: Urs Schreiber on August 19, 2004 1:57 AM | Permalink | PGP Sig | Reply to this

Re: Triangle Space

Hi Urs,

all correspond to the same triangle in \mathbb{R}^2

True. But this fact should not keep you from getting to the interesting stuff.

Right. I pretty much understand that this is the same concept as what I called “equivalent loops,” i.e. points in loop space with the same image in target space differing only by parameterization, so it doesn’t bother me. But what is different here is that “equivalent n-gons” are discrete isolated points in n-gon space, where we had a continuum loop subspace before. This subspace of equivalent loops allowed us to define equivalent vector fields corresponding to loop vector fields tangent to the loop, i.e. your K vector fields.

This means that we cannot define this reparameterization vector field K on n-gon space, because we cannot define curves on these discrete points. I am tempted to suggest considering n-gon space over a discrete manifold, e.g. n-diamond, as in our notes :)

That might make an interesting follow-up paper ;)

Another thing about this redundancy of equivalent n-gons is the 2n-fold symmetry under cyclic permutations of the coordinates. That wasn’t quite obvious to me before. This translates to an \infty-fold symmetry on loop space (at least for loop space over \mathbb{R}^m).

With that said, I agree that we can still learn a lot about loop space by studying n-gon space. At the same time, we should understand the differences, e.g. no K vectors.

More later, but goodnight for now,
Eric

Posted by: Eric on August 19, 2004 6:11 AM | Permalink | Reply to this

Re: Triangle Space

Hi Eric -

yup, there is no continuous reparameterization invariance on n-gons. That’s what we are losing in this approximation.

BTW, the cyclic permutation symmetry is n-fold on oriented n-gon space and 2n-fold only on non-oriented n-gon space.

Are you planning to include a discussion of the differential geometry of n-gon space in your notes?

Posted by: Urs Schreiber on August 19, 2004 11:06 AM | Permalink | PGP Sig | Reply to this