How does multiplying a 4-vector with the Minkowski metric in special relativity convert it from contravariant vector to a covariant vector?

  • A contravariant vector is something whose components are defined to change according to [math]\bar{A}^\alpha=\frac{\partial\bar{x}^\alpha}{\partial x^j}A^j[/math] upon a change of coordinates, while those of a covariant vector transform according to [math]\bar{A}_\alpha=\frac{\partial x^j}{\partial \bar{x}^\alpha}A_j[/math]. How does multiplying by a matrix (the metric, in this case) bring about a change in the transformation rules of a 4-vector?

  • Answer:

    This is actually a very general result applicable far beyond the case of the Minkowski metric and Lorentz transformations. So, in order to make the essential meaning behind 'lowering an index' clear, I'll work in a rather general setting.

    We start with a (finite dimensional) vector space [math]V[/math] over a certain field, which we take to be [math]\mathbb R[/math]. The space of linear maps from [math]V[/math] to [math]\mathbb R[/math] is a vector space as well (since linear combinations of linear maps are again linear). This space [math]V^\star[/math] is said to be dual to [math]V[/math], and is what a condensed matter physicist would refer to as the reciprocal vector space.

    Now, suppose that you have a linear map [math]\phi:V\rightarrow W[/math], where [math]V[/math] and [math]W[/math] are vector spaces. This canonically induces a map [math]\phi^t:W^\star\rightarrow V^\star[/math] between the respective dual spaces in the opposite direction. Here is how. Say you start with some linear map [math]g:W\rightarrow\mathbb R[/math], which by definition is an element of [math]W^\star[/math]. This map [math]g[/math] sends vectors [math]w\in W[/math] to real numbers. Now, let's say we have a vector [math]v\in V[/math]. We can use [math]\phi[/math] to take it to a vector [math]w=\phi(v)\in W[/math] and then compose with [math]g[/math] to get a map [math]f:V\rightarrow\mathbb R[/math] given by [math]f=g\circ\phi[/math] (compositions are to be read right to left). In other words, we can use [math]\phi[/math] to send elements of [math]W^\star[/math] to elements of [math]V^\star[/math]. This assignment is precisely the transpose map [math]\phi^t[/math]. As an exercise, you may wish to check that if you take the vectors in [math]V[/math] to be column vectors and those in [math]V^\star[/math] to be left multiplication by row vectors, the notion of transpose introduced here is equivalent to the notion of transpose familiar from matrices.

    Next, we consider the question of what the dimension of [math]V^\star[/math] is. A little thought ought to convince you that it is the same as that of [math]V[/math]. Here's the basic idea. You start with a basis [math]v_i[/math] of [math]V[/math] and consider the maps [math]f^j\in V^\star[/math] such that [math]f^j(v_i)=\delta^j_i[/math]. So, [math]f^1[/math] sends [math]v_1[/math] to [math]1[/math] and every other basis vector to [math]0[/math], [math]f^2[/math] sends [math]v_2[/math] to [math]1[/math] and every other basis vector to [math]0[/math], and so on. Quite evidently, these [math]f^j[/math] form a basis for [math]V^\star[/math], and it follows at once that [math]\mathrm{dim}\,V=\mathrm{dim}\,V^\star[/math].

    We know that two vector spaces of the same dimension are isomorphic, and indeed [math]V[/math] and [math]V^\star[/math] are isomorphic. However, they are not canonically isomorphic, meaning that there is no natural isomorphism to single out amidst the plethora of isomorphisms available. But what about the prescription we outlined in the previous paragraph, which took basis elements in [math]V[/math] to basis elements in [math]V^\star[/math]? Surely, that seems like a natural choice to make. Actually, no. That choice is sensitive to the choice of basis that we make. To see this, note that if we had chosen a different basis given by [math]\phi(v_i)[/math], where [math]\phi[/math] is an invertible linear map from [math]V[/math] to itself, i.e. an automorphism, the corresponding basis in [math]V^\star[/math] would be [math](\phi^t)^{-1}(f^j)[/math] as opposed to [math]\phi(f^j)[/math] (*) (just repeat the arguments in the third paragraph with [math]V=W[/math]). In essence, the two vector spaces [math]V[/math] and [math]V^\star[/math] are not canonically isomorphic because they transform differently.

    However, there is a way of making them canonically isomorphic by adding more structure. Namely, we introduce an inner product [math]\langle\cdot,\cdot\rangle:V\times V\rightarrow \mathbb R[/math]. Now, given a vector [math]v\in V[/math], we can canonically choose the map [math]f=\langle v,\cdot\rangle\in V^\star[/math]. Conversely, if we have a canonical isomorphism [math]\flat:V\rightarrow V^\star[/math], the inner product [math]\langle v, v'\rangle[/math] may simply be defined as [math]v^\flat(v')[/math] (the map [math]\flat[/math] and its inverse [math]\sharp[/math] are called musical isomorphisms and are typically written as superscripts, so that [math]v^\flat:=\flat(v)[/math] and [math]f^\sharp =\sharp(f)[/math]). The two notions are therefore completely equivalent.

    Using the inner product to map a vector in [math]V[/math] to a dual vector in [math]V^\star[/math] is what you are essentially doing when you 'multiply the vector by the metric'. Since [math](V^\star)^\star[/math] is canonically isomorphic to [math]V[/math] without the need for any extra structure such as inner products (show this!), you may call the elements of either [math]V[/math] or [math]V^\star[/math] contravariant and those of the other covariant (taking care about the fact that the metric matrix for one is the inverse of that for the other).

    Physicists usually refer to tangent vectors on a manifold as contravariant and their duals, the cotangent vectors, as covariant. This choice of terminology is actually quite curious if you are familiar with category theory, wherein functors that preserve the directions of morphisms, such as the tangent space functor (defined on smooth manifolds with a distinguished point), are referred to as covariant, and functors that reverse the directions of morphisms, such as the cotangent space functor (again defined on smooth manifolds with a distinguished point), are referred to as contravariant. But what's in a name?

    (*) Strictly speaking, [math]\phi(f^j)[/math] doesn't even make sense, since the domain of [math]\phi[/math] is [math]V[/math], not [math]V^\star[/math]. But what I mean is the following. Say, for example, you have a basis [math]\{v_1,v_2\}[/math] of [math]V[/math], and you have [math]\phi(v_1)=av_1 + bv_2[/math] and [math]\phi(v_2)=cv_1 + dv_2[/math]. The new basis for [math]V^\star[/math] is not given by [math]af^1 + bf^2[/math] and [math]cf^1 + df^2[/math], but by

    [math](\phi^t)^{-1}(f^1)=\frac{df^1-cf^2}{ad-bc},\qquad (\phi^t)^{-1}(f^2)=\frac{-bf^1+af^2}{ad-bc}.[/math]
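To make the footnote concrete, here is a minimal numerical sketch in Python (the variable names and the particular automorphism are mine for illustration, not from the answer). It picks a specific [math]\phi[/math] on [math]\mathbb R^2[/math], forms the new basis [math]w_i=\phi(v_i)[/math], and checks that the dual basis of the new basis is given by the rows of the inverse matrix, exactly as the footnote's formulas state:

import numpy as np

# Assumed toy example: phi(v1) = a v1 + b v2, phi(v2) = c v1 + d v2.
a, b, c, d = 2.0, 1.0, 1.0, 3.0
M = np.array([[a, c],
              [b, d]])                 # columns are the new basis vectors w1, w2

# Dual vectors are row covectors. The claimed new dual basis g^j consists
# of the rows of M^{-1}, since then g^j(w_i) = (M^{-1} M)_{ji} = delta^j_i.
G = np.linalg.inv(M)
assert np.allclose(G @ M, np.eye(2))   # defining property g^j(w_i) = delta^j_i

# Compare with the closed-form expressions in the footnote:
det = a * d - b * c
g1 = np.array([d, -c]) / det           # (phi^t)^{-1}(f^1) = (d f^1 - c f^2)/(ad - bc)
g2 = np.array([-b, a]) / det           # (phi^t)^{-1}(f^2) = (-b f^1 + a f^2)/(ad - bc)
assert np.allclose(G, np.vstack([g1, g2]))
print("dual basis transforms with (phi^t)^{-1}, as claimed")

Note that the naive guess, taking covectors with the same coordinates as the new basis vectors, satisfies the defining property only when M is orthogonal; this is exactly the sense in which [math]V[/math] and [math]V^\star[/math] transform differently.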

Arpan Saha at Quora


Other answers

Thanks for the A2A. This can be understood best in terms of matrices.

A contravariant vector (also called a 4-vector) [math]b[/math] is defined to transform under a Lorentz transformation as [math]b \to \Lambda b[/math], where [math]\Lambda[/math] satisfies

[math]\Lambda^T \eta \Lambda = \eta \implies \eta \Lambda = \left( \Lambda^{-1} \right)^T \eta. \qquad (1)[/math]

A covariant vector [math]a[/math] is defined to transform as [math]a \to \left( \Lambda^{-1} \right)^T a[/math]. This definition is used so that the quantity [math]a^T b[/math] (known as the inner product) is Lorentz invariant. We can check this:

[math]a^T b \to \left( \left( \Lambda^{-1} \right)^T a \right)^T \left( \Lambda b \right) = a^T \Lambda^{-1} \Lambda b = a^T b,[/math]

as required.

Now, to answer your question, which is: if [math]b[/math] is a contravariant vector, then why is [math]\eta b[/math] a covariant vector? Consider the Lorentz transformation of [math]\eta b[/math]:

[math]\eta b \to \eta \left( \Lambda b \right) = \left( \Lambda^{-1} \right)^T \left( \eta b \right),[/math]

where we have used property (1) of [math]\Lambda[/math]. Thus [math]\eta b[/math] transforms like a covariant vector. Since this is precisely the definition of a covariant vector, we conclude that [math]\eta b[/math] is a covariant vector. QED.
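This argument is easy to check numerically. Below is a minimal sketch (the boost velocity, the signature [math](-,+,+,+)[/math], and the sample vectors are my own assumptions for illustration) that verifies property (1), the covariant transformation of [math]\eta b[/math], and the invariance of [math]a^T b[/math]:

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-,+,+,+)

v = 0.6                                    # assumed boost velocity in units of c
gamma = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[gamma,      -gamma * v, 0.0, 0.0],
                [-gamma * v,  gamma,     0.0, 0.0],
                [0.0,         0.0,       1.0, 0.0],
                [0.0,         0.0,       0.0, 1.0]])   # boost along x

assert np.allclose(Lam.T @ eta @ Lam, eta)             # property (1)

b = np.array([1.0, 2.0, 3.0, 4.0])                     # a contravariant vector
lhs = eta @ (Lam @ b)                                  # transform, then lower
rhs = np.linalg.inv(Lam).T @ (eta @ b)                 # lower, then transform covariantly
assert np.allclose(lhs, rhs)                           # eta b transforms covariantly

a = np.array([0.5, -1.0, 0.0, 2.0])                    # a covariant vector
assert np.isclose(a @ b, (np.linalg.inv(Lam).T @ a) @ (Lam @ b))  # a^T b invariant
print("all checks pass")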

Prahar Mitra

The special role played by the metric here is that it provides an identification of a vector space [math]V[/math] with its dual space [math]V^*[/math] of one-forms or cotangent vectors. This is because the metric provides a natural way to multiply two tangent vectors, via the induced inner product or dot product, whereas without the metric one only knows how to evaluate a cotangent vector as a linear functional on a tangent vector.

In other words, given a tangent vector [math]v[/math], the curried function [math]\langle v, \cdot \rangle[/math] is now a linear functional on tangent vectors, i.e., a cotangent vector. At the same time, if [math]f[/math] is a cotangent vector, then there exists a tangent vector [math]v[/math] such that [math]f(w) = \langle v, w \rangle[/math].

So how precisely is this related to multiplication by the metric? It's quite simple. Given a tangent vector [math]v^i[/math], the contraction [math]g_{ij}v^j[/math] is evidently a tensor with just a single lower index -- a cotangent vector. We often write this cotangent vector using the same symbol that we used for the original tangent vector -- in this case as [math]v_i[/math]. So multiplication with the metric has given us a cotangent vector from a tangent vector, which we can then use to compute an inner product with another tangent vector, as above.

To perhaps make this more explicit, consider that we can define an inner product between vectors [math]v^j[/math] and [math]w^i[/math] via [math]g_{ij}v^j w^i[/math]. This reduces to the familiar expressions when using the Euclidean or Minkowski metrics, since many of the entries of the matrix form of the metric are zero. But now we find that [math]g_{ij}v^j w^i = v_i w^i[/math] -- in other words, the cotangent vector [math]v_i[/math] that we obtained via multiplication by the metric is now playing the role of the curried inner product function mentioned above.

This is a special case of the more general operation of lowering an index on a tensor. This phrase refers to exactly what we did above: contracting with the metric tensor to obtain a new tensor where one of the upper indices now appears as a lower index. It's conventional to always use the same base symbol for the new tensor. One can also raise an index by multiplying by the inverse metric [math]g^{ij}[/math].
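For concreteness, here is a short numerical sketch of lowering and raising an index (the component values and the signature [math](-,+,+,+)[/math] are my own illustrative assumptions):

import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])        # g_ij, Minkowski metric
g_inv = np.linalg.inv(g)                   # g^ij (here equal to g itself)

v_up = np.array([1.0, 2.0, 3.0, 4.0])     # v^j, components of a tangent vector
w_up = np.array([0.5, -1.0, 0.0, 2.0])    # w^i, another tangent vector

v_down = g @ v_up                          # lowering: v_i = g_ij v^j, a cotangent vector

# The two expressions for the inner product agree: g_ij v^j w^i = v_i w^i.
assert np.isclose(np.einsum('ij,j,i->', g, v_up, w_up), v_down @ w_up)

# Raising the index with the inverse metric recovers the original components.
assert np.allclose(g_inv @ v_down, v_up)
print("all checks pass")

The last assertion makes explicit the remark above that the metric matrix acting on one space is the inverse of the one acting on the other: lowering with [math]g_{ij}[/math] and raising with [math]g^{ij}[/math] are mutually inverse operations.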

Zach Conn
