The largest US mobile carrier, Verizon Wireless, has started funneling traffic between Verizon feature phones and the web through a transcoder from Novarra. Verizon calls the service “Optimized View” and has added promotional information and a FAQ to its customer website. There is also a page on the carrier’s developer site with links to an opt-out form and to a PDF document detailing the rules the transcoder uses to decide which sites to transcode.
Given the disastrous effects that Vodafone’s UK roll-out of Novarra had on mobile sites and services, I’m rather apprehensive about this. In the PDF, Verizon/Novarra say that they won’t change the User-Agent header or transcode sites that have submitted an opt-out request or have a URL matching one of these patterns:
*.mobi, m.*, mobile.*, avantango.*, wap.*, iphone.*, <domain>/m/*, <domain>/mobile/*, pda.*, wireless.*, wml.*, xhtml.*, <domain>/m/, <domain>/gmm/, <domain>/portable
For sites that have not opted out and do not use one of the mobile URL patterns, the transcoder will change the User-Agent and a number of other headers to ones mimicking a desktop browser. However, the site will not be transcoded if it meets any of the following conditions:
- It uses one of the following mobile-specific DTDs:
- <!DOCTYPE html PUBLIC "-//OMA//DTD XHTML Mobile 1.2//EN" "http://www.openmobilealliance.org/tech/DTD/xhtml-mobile12.dtd">
- <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.1//EN" "http://www.wapforum.org/DTD/xhtml-mobile11.dtd">
- <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.wapforum.org/DTD/xhtml-mobile10.dtd">
- <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.1//EN" "http://www.w3.org/TR/xhtml-basic/xhtml-basic11.dtd">
- <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML Basic 1.0//EN" "http://www.w3.org/TR/xhtml-basic/xhtml-basic10.dtd">
- It uses one of the following MIME types:
- application/vnd.wap.xhtml+xml
- text/vnd.wap.wml
- It has a self-referencing link rel tag:
- <link rel="alternate" media="handheld" href="http://mymobilesite.com/">
- It sends a “Cache-Control: no-transform” header.
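For sites that want to send that last signal everywhere, here is a minimal sketch, assuming a standard Java servlet container (the class name is mine, not anything Verizon or Novarra prescribe), of a filter that adds the header to every response:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Illustrative filter: asks transforming proxies not to alter any response from this site.
public class NoTransformFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // "no-transform" is the Cache-Control directive the Novarra rules say they honor.
        ((HttpServletResponse) res).addHeader("Cache-Control", "no-transform");
        chain.doFilter(req, res);
    }
}

Mapped to /* in web.xml, this covers the whole site. Keep in mind the WML caveat raised in the comments below: no-transform can interfere with WAP 1.x gateways that need to compile WML.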
I don’t have a Verizon account, but I did do some limited testing in a Verizon store. I used a Motorola V950, which runs the Openwave 6 browser, and mostly tested my own sites. The transcoder seems to follow the rules in the PDF: none of my sites that use a mobile URL pattern, Doctype or no-transform were transcoded. A couple of test pages I created as copies of mobile pages, but with none of the Novarra trigger conditions, were transcoded even though they were under 10KB.
I didn’t really test the quality of the transcoded pages, but I noticed that a two-line navigation bar is added to the top and bottom of each page and that pagination is used. I thought the transcoded version of howardforums.com was reasonably usable, but my test sites, which were under 10KB, were needlessly split into two pages.
According to Verizon, the transcoder does offer users the ability to view the original version of any transcoded page by clicking a “Turn off Optimization” link in the bottom navigation footer. As a user, I consider this a very desirable feature which all transcoders should implement.
Secure HTTPS sites are transcoded, except for “banking sites”. I wonder how Novarra identifies banks? Users are warned that their security may be compromised when visiting a non-banking secure site through the transcoder. I didn’t try any secure sites so I don’t know what this warning looks like.
Transcoders are the bane of mobile web developers. This one will probably not be as disruptive as the Vodafone UK one was, partly because Novarra has learned some lessons from that debacle and is doing a better job of detecting mobile sites, and partly because the development community is more aware of the transcoding problem and how to work around it. There should be no impact on m. and .mobi sites and others with a recognized mobile URL pattern. Sites that use a mobile doctype and don’t do browser detection will also be unaffected. However, sites on .com domains that rely on the User-Agent or other headers to optimize for specific handsets need to either get on the white-list or start checking the “X-Device” headers, where the transcoder puts the original values of the headers it has modified, as follows (a short sketch of this fallback appears after the header details below):
Original Header | Renamed Header
User-Agent | X-Device-User-Agent
Accept | X-Device-Accept
Accept-Charset | X-Device-Accept-Charset
Accept-Encoding | X-Device-Accept-Encoding
Accept-Language | X-Device-Accept-Language
The Verizon transcoder can be identified by two headers that it sends:
Via: 1.1 Novarra (Vision/7.3)
X-Mobile-Gateway: Novarra-Vision/7.3 (VZW; Server-Only)
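For sites that do handset detection, the practical fix is to prefer the preserved X-Device-* values whenever the transcoder is in the path. A rough sketch follows, assuming a Java servlet environment; only the header names come from the Verizon/Novarra documentation, the helper class itself is illustrative:

import javax.servlet.http.HttpServletRequest;

// Illustrative helper for recovering the handset's original headers behind the transcoder.
public class DeviceHeaders {

    // True if the request appears to have passed through the Novarra/Verizon transcoder.
    public static boolean viaNovarra(HttpServletRequest req) {
        String via = req.getHeader("Via");
        String gateway = req.getHeader("X-Mobile-Gateway");
        return (via != null && via.contains("Novarra"))
                || (gateway != null && gateway.contains("Novarra"));
    }

    // Prefer the preserved original header; fall back to the (possibly spoofed) one.
    public static String originalUserAgent(HttpServletRequest req) {
        String original = req.getHeader("X-Device-User-Agent");
        return (original != null) ? original : req.getHeader("User-Agent");
    }
}

The same fallback pattern applies to Accept, Accept-Charset, Accept-Encoding and Accept-Language, and it is harmless for requests that never touch the transcoder.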
The Vodafone transcoder broke many content (ring tones, game, application) download sites. I don’t see that happening with this one because Verizon already blocks almost all off-portal downloads.
The problem with transcoders changing headers is that it forces millions of mobile sites around the world to change their code in order to keep the same functionality. It’s an annoyance for capable mobile web developers who somehow find out that another transcoder has been deployed somewhere in the world. The real problem is with sites that aren’t actively maintained or whose developers don’t actively follow the doings of the W3C or monitor the mobile development mailing lists, forums and blogs. Which, I suspect, is the majority of sites worldwide.
Verizon and Novarra claim that their transcoder partially follows the recommendations of the W3C Content Transformation Guidelines. However, those guidelines are still a work in progress, and the issue of whether transcoders should alter the User-Agent at all, except in a very limited number of cases such as when a site returns no content when presented with a mobile browser User-Agent, is still under discussion. It would be far better for the health of the mobile web if transcoders did not alter the User-Agent and other headers, as recommended by the Rules for Responsible Reformatting: A Developer Manifesto, a document that evolved from discussions on the wmlprogramming Yahoo Group and which has been signed and adopted by several transcoder vendors, but not by Novarra.
Related Posts:
Vodafone’s Heavy-Handed Transcoder
How Web to Mobile Transcoding Should Work
Opting Out of Transcoding
OpenWeb and InfoGin Adopt the Developer Manifesto!
Jenny, that award is for Novarra’s widget platform, which is different from their transcoder.
The issues with Novarra’s transcoder as described in my post still exist.
It seems that Novarra has learned from their mistakes and has positive relations with all their clients now. Check out this press release from just this past July, which gives Novarra a prestigious award for being innovative.
http://www.novarra.com/news/press-releases/novarra-awarded-%E2%80%9Cmost-innovative-carrier-infrastructurevas-platform%E2%80%9D/
It seems that Novarra has learned from their past mistakes and responded to any problems in an extremely timely manner. Read up on Novarra in the news below. The success of the company is tremendous.
just another industry comment: unbelievable what Novarra tries to tell us >>>>>>>>
wait, am i getting this right ?? novarra takes out content (compresses) then injects branding (adds content) and adds targeted advertising, presumably using flash crap and javascript and pages load quicker on mobile, beeeeeeeeeeeeejeeeeeeeeeezus its a furkin miracle O_O
http://www.theregister.co.uk/2009/02/11/novarra_laptops/
The Novarra frustration never ends:
http://www.seoprinciple.com/how-vodafone-and-novarra-killed-mobile-commerce/20/
WE DO NOT NEED TRANSCODERS
we tested Novarra and decided not to use them
WE DO NOT NEED TRANSCODERS
> I disagree with you over who, out of you and Rigo,
> is more qualified to give legal counsel.
you are mistaken. All legislations assume that there are moments when lawyers and judges listen to domain experts to understand the details of certain situations and choices. Even assuming that Rigo’s general framework stands (which is very open to discussion, IMO), the decision about whether “no-transform” is a good enough way to avoid transcoding must be based on reasoning that goes beyond the competence of a lawyer and must be handed over to “technicians”. And when it comes to technicians, while it’s true that experts may disagree, the reality is that the overwhelming majority thinks UA spoofing is abuse, and some of those technicians have already volunteered to testify against transcoders in court should this ever be needed.
If you want a feel of what content owners in great majority think of UA spoofing, just read the comments from each of the signers of the original Vodafone Rant:
http://wurfl.sourceforge.net/vodafonerant/
Luca
OK, that’s enough for me. I’ve discussed why I’ve not signed the Manifesto with you by private email before. I disagree with you over who, out of you and Rigo, is more qualified to give legal counsel.
> I told you clearly why I didn’t sign the Manifesto:
> because I object to its bellicose tone.
can you please tell me which sentences you find bellicose?
http://wurfl.sourceforge.net/manifesto/
anyway, if major transcoder vendors have signed it, it is probably not so bellicose as you say, don’t you think?
> It’s fine for you to disagree with Rigo’s opinion –
> but you’re not a lawyer, and therefore
> not qualified to do so.
very wrong. In fact, in that particular case, I believe I am more qualified than Rigo. A lawyer can say that there must be a way for websites to tell transcoders not to transcode them, but he cannot be more qualified than me in saying what those ways should be. I understand that no-transform has too much impact on existing apps and it is not reasonable to require that content owners change their applications. Rigo obviously does not understand this, since he is blindly trusting what his W3C colleagues told him.
Also, there is something I need to clarify here. I wouldn’t want you to misrepresent my position at a later stage.
Even before we discuss which ways are acceptable to tell transcoders not to transcode, my view is that the burden of recognizing mobile sites MUST sit squarely on transcoders. In other words, transcoders must err on the side of not transcoding, or they risk infringing on the copyright of content owners (particularly wrt mobile content).
On this last part, though, I am afraid we will need to wait for the lawyers of media companies to explain to courts around the planet why Rigo is wrong.
Luca
Luca – I told you clearly why I didn’t sign the Manifesto: because I object to its bellicose tone.
Support of either CTG or Manifesto is support for regulating transcoders, as is evident from the overlap in their content. That’s what both documents exist for.
It’s fine for you to disagree with Rigo’s opinion – but you’re not a lawyer, and therefore not qualified to do so. When he writes a post on the technical implementation of device capabilities databases that you disagree with, I’ll be inclined to support you ;)
> So the fact that I argue with you means that I am somehow
> serving the cause of transcoders?
when you come up with all kinds of arguments (rational and irrational) to defend transcoders, and you continuously go out of your way to find use cases that may legitimize transcoders against the interest of developers and content owners, then yes: you are serving the cause of transcoders.
Also, the Manifesto already represents an agreement which makes transcoder vendors and content owners happy. CTG is a step back. You should be pushing to make CTG as good as the Manifesto. You are pushing the other way.
> I’m not trying to defend transcoders:
> I want to see them regulated.
if this was true, you would have signed the Manifesto 6 months ago.
> I fail to understand how either of these sentences
> equate to my saying “copyright is diminished when
> you put things online”.
what you wrote and how I described it is under everyone’s scrutiny. Who reads us will decide whether I misrepresented what you wrote or not.
> Rigo gave his advice. Copyright stands on the web [..], and
> transcoding is legal. In the absence of any legal advice
> (from a lawyer) claiming that transcoding is illegal,
> I think the legal matter is settled for now.
I only said that transcoding is illegal when performed against the intentions of the content owner. I agree with Rigo when he writes that content owners have every right to demand that their content is not transcoded. I strongly disagree with Rigo when he claims that no-transform is a good enough way to communicate that a site should not be transcoded.
Luca
So the fact that I argue with you means that I am somehow serving the cause of transcoders? Weird. To me, it just means I disagree with you. I’m not trying to defend transcoders: I want to see them regulated.
Dissecting what I said:
> I’m not in a position to propose exceptions to copyright regulations
True.
> and the only legal opinion we’ve been able to get on the
> matter (from the W3C counsel) is that transcoding web content
> isn’t a breach of copyright.”.
True.
I fail to understand how either of these sentences equate to my saying “copyright is diminished when you put things online”.
But back to the point… Rigo gave his advice. Copyright stands on the web (I never claimed it didn’t), and transcoding is legal. In the absence of any legal advice (from a lawyer) claiming that transcoding is illegal, I think the legal matter is settled for now.
@Tom
> Luca, I’ve not said that “copyright is diminished
> when you put things online”, and you’ve misrepresented
> my position to Rigo when you claim so.
I really don’t think I did. First off, I did not tell Rigo anything like “Tom Hume says…”. I just explained how W3C’s position seemed to be understood by some and asked whether this was accurate.
Secondly, here is what *you* wrote in one of your comments (November 25):
———————————-
“I’m not in a position to propose exceptions to
copyright regulations, and the only legal opinion
we’ve been able to get on the matter (from the
W3C counsel) is that transcoding web content
isn’t a breach of copyright.”.
———————————-
To me this is enough to ask Rigo whether he also thinks that copyright is not infringed when transcoding.
Thirdly, I used my time to check what Rigo had said for real after you quoted him multiple times. You should be grateful for my serving the truth. Your nit-picking seems to indicate the opposite.
> Rigo is quite clear. He says that the existence of
> no-transform gives content providers a means to
> protect their content.
yes he does. And this part I strongly disagree with.
> He doesn’t mention the UA string: it’s you who asserts
> that service different content by UA is a breach of
> copyright, not Rigo.
I never said Rigo mentioned the UA. I just observed that his reasoning that the cache-control header can be used to protect a site from transcoding can be taken a step further: if a transcoder spoofs the UA, it is effectively preventing a site from protecting its content from transcoding, since websites won’t even know that they need to send “no-transform” to protect against transcoding.
> Seeing as we’ve reached the seemingly inevitable
> point where you’re starting to throw abuse instead
> of reasoning politely (your comment on WMLP that
> “Tom is on some boat of his own licking operator ass”
> doesn’t seem like particularly enlightened debate to me),
> I’ll leave you to calm down a little.
Another blow below the belt. Why are you continuously cross-pollinating this thread, which everyone (except you) has been trying to keep as professional and fact-based as possible, with elements from a different context? Look, I am not running for the US presidency. I am not having an affair with Monica or anything like that. I just want to explain to the world why transcoders are bad and what kind of dirty tricks they use to conquer legitimation where no legitimation is due. Anyway, I am very calm. The hyperbole I used on WMLProgramming was meant to express how I felt about you: you never acknowledged any of your wrong judgments wrt the whole transcoder issue and you exploit each and every chance to come to the rescue of transcoders and operators. This is not the attitude of a developer who feels damaged by transcoders. This is the reaction of someone way too busy serving the cause of those who are pushing to change the rules of the game.
Luca
Luca, I’ve not said that “copyright is diminished when you put things online”, and you’ve misrepresented my position to Rigo when you claim so.
Rigo is quite clear. He says that the existence of no-transform gives content providers a means to protect their content. He doesn’t mention the UA string: it’s you who asserts that service different content by UA is a breach of copyright, not Rigo.
Seeing as we’ve reached the seemingly inevitable point where you’re starting to throw abuse instead of reasoning politely (your comment on WMLP that “Tom is on some boat of his own licking operator ass” doesn’t seem like particularly enlightened debate to me), I’ll leave you to calm down a little.
@Tom
Tom, it is so hard to discuss and follow a logical thread with you. You keep changing the subject and bringing things back to areas which have already been cleared. Once more, I’ll need to bring it all back on track.
So, the discussion has led us to identify the point where transcoding can potentially harm the rights of content owners. Those rights are sacred, as everyone except you seems to agree.
You had been arguing for a few days that the point of Rigo Wenning (W3C counsel) was that copyright was in some way diminished when content was published on the web. I checked with the source, and it turns out that it was not really the way you reported it. Quite the opposite, in fact. Mr Wenning confirmed that copyright infringement can happen on the Internet too (and if you think about it, I am not sure how you can consider this a surprise). He even went as far as explicitly stating that transcoders may easily turn into offenders in this area (“transcoding proxy is producing a derivative work out of the content of the initial author and lacking rights to do so, would commit copyright infringement with all the consequences that come with it”).
One important aspect of Mr. Wenning’s answer is that transcoders may be considered similar to search engines. Since there is a possibility for content owners to restrict part of their content from search engines (robots.txt), content owners cannot complain if their content gets indexed or even cached by search engines. Because of this, Wenning concludes that transcoders are in a similar situation since a way to opt-out of transcoding exists: no-transform.
I am not 100% convinced that transcoders and search engines can be put on the same level. After all, search engines help the business model of content owners, while transcoders want to get hold of third-party content for free. Anyway, for the sake of discussion, let me buy Mr. Wenning’s assumption and concentrate on the “opt-out” part of his reasoning:
> The Mobile Best Practices contain
> a technical feature that allows content providers to
> set a bit if they do not want their content to be
> transformed by the transforming proxy. This puts us
> into an analogy to robots.txt. A content provider
> that does not want his content to be transformed
> is able to reflect that very easily in this non-transform
> HTTP header.
I have a whole bunch of objections to this. Here they come, in no particular order:
– I think I lost the part where the W3C has the authority to set standards about regulating copyright. Using the same rule, any organization (or even any company) might declare that you need to place a header called “x-please-do-not-transcode” or whatever, and off they go abusing copyright simply because content owners have not done something which they technically could have done to protect their copyright.
– robots.txt is downloaded by search engines before any other content is requested from web servers. This means that content owners do not need to change their applications to protect their content from search engines. This is very different from transcoders, which introduce themselves with an already spoofed User-Agent and a removed UAProf header!
– the Cache-Control: no-transform header existed before transcoders. The W3C has hijacked it under the influence of Novarra. There is one problem with this, though. Adding this header will make WML applications stop working in many cases (WAP 1.x gateways need to compile WML into binary format in order for WAP 1.x browsers to work!). So, the W3C-recommended way for content owners to protect their content forces them to break existing apps!
– Rigo says that “no-transform” is a clear indication that the content owner does not want to be transcoded. This reasoning can easily be reversed, though: the fact that the content owner is serving different content based on the UA string is an even clearer indication that the content owner does not want to be transcoded! By spoofing the UA, a transcoder/operator has already infringed on the rights of a content owner, because they have effectively prevented them from defending their copyrighted content!
In short, the more you look at this, the more transcoders appear like a bunch of far-west outlaws (and you look like someone who goes out of his way to protect the invisible rights of those outlaws)
Luca
@Tom
> Luca – out of interest, what phrase is Rigo commenting on?
> He seems to be referring to something you’d sent him
> for comment, I’d like to know what that is.
here is what I asked Rigo:
> I heard your name come up a few times with regards to an
> opinion that you are reported to have expressed about the
> legitimation for transcoders to modify and re-purpose internet
> content without the consent of the content owner. In short, your
> opinion is reported as “if you publish your content on the Web, you
> basically forfeit your copyright and others may legitimately do
> things without too much respect for copyright laws”.
>
> Because of this second-hand report of what W3C’s legal counsel
> said, discussion often reach a blind alley and no further
> advancement is possible.
>
> Would it be possible to know from you what your opinion about the
> subject matter is from the source?
Happy now?
Luca
Luca – out of interest, what phrase is Rigo commenting on? He seems to be referring to something you’d sent him for comment, I’d like to know what that is.
I think we’re both looking at the advice and seeing different things.
“A content provider that does not want his content to be transformed is able to reflect that very easily in this non-transform HTTP header. It is therefor very important for transcoding proxies to honor this feature. Ignoring this feature would mean that a transcoding proxy is producing a derivative work out of the content of the initial author and lacking rights to do so, would commit copyright infringement with all the consequences that come with it.”
Yes, proxies must respect this header: you and I both agree here.
Does this mean that transcoding is illegal? No, as Rigo says, because “this construct gives you an easy way out of the copyright trap by giving the content provider a means to express his wishes technically.”
So content providers should use the no-transform header to say “I don’t want my content transformed”, and proxies must respect this. In the absence of this header, it’s therefore reasonable for proxies to transcode.
I think this is consistent with what I’ve been saying: without a clear “do not transform” indication from the provider – which must be respected by proxies – content can be transformed.
Yep, (b) does contradict myself – apologies, error between seat and keyboard, as they say.
Transcoding is only misuse when it’s applied wrongly IMHO. I can conceive of situations where it’s better to have crappy access to transcoded web content, than no access to anything at all.
The problem isn’t transcoding per se (IMHO), it’s, as you say, enforced encoding: the heavy-handed and inconsiderate way in which it is sometimes rolled out, riding roughshod over the work of those of us building decent mobile experiences.
The only legal advice we have available to us makes sense to me: if you put stuff on the internet and don’t use the basic mechanisms that exist to protect it (“no-transform”), you should expect it to be cached, proxied or transcoded. Equally we should expect proxies to respect these protection mechanisms.
Transcoding *should* be opt-in, I agree. But if you opt into it, then guess what… that UA needs to be changed to allow it to be done. And if you’re opting in, I think that’s OK. @legalien gave us a use case for when a user might prefer to have a transcoded full-web experience over a mobile-specific experience. As long as it’s the user making this choice, and not a transcoder vendor or deployer – I reckon it’s OK.
@Tom
> the only legal opinion we’ve been able to get on the
> matter (from the W3C counsel) is that transcoding
> web content isn’t a breach of copyright.
I have an important update in this discussion. I got in touch with the W3C counsel, Mr Rigo Wenning, and asked what views he expressed about copyright and transcoding during the W3C plenary, since some were quoting him as stating that copyright is diminished when you publish content on the web. Here is his answer:
———————————————————
Dear M. Passani,
transcoders are a bit like search engines. They take content and
repurpose it. The legal questions involved in this are pretty
complex. I gave a presentation on the troubles of the mobile web
transcoding guidelines on W3C Technical Plenary. It is interesting to
see what finally remains in the brains of people from that talk.
I don’t think I said the phrase as reported below and I’m glad to
check back with me.
What I said was:
People putting content publicly visible on the internet have -by doing so- expressed their will to allow a most widespread distribution in the usual framework of the Web. This means that even without an explicit license, I can browse and read that content without committing an infringement. Search engines even stock that content.
This is just normal business on the Web and assumed to be tolerated
by content providers. The content provider can’t really attack the
search engine in court as there is a much easier way for the content
provider to solve the issue: robots.txt. It is sufficient to tell the
robot in robots.txt that the content provider does not want to have
their content indexed. Every court in this world will therefor refuse
to judge unless the content provider has put a robots.txt into place
and it was ignored by the robot.
In Mandelieu, I drew an analogy to this scenario for transcoding Web
content to make it mobile friendly. The Mobile Best Practices contain
a technical feature that allows content providers to set a bit if they do not want their content to be transformed by the transforming
proxy. This puts us into an analogy to robots.txt. A content provider
that does not want his content to be transformed is able to reflect
that very easily in this non-transform HTTP header. It is therefor
very important for transcoding proxies to honor this feature.
Ignoring this feature would mean that a transcoding proxy is producing a derivative work out of the content of the initial author
and lacking rights to do so, would commit copyright infringement with
all the consequences that come with it.
I think this is very different from what you heard by some third party about my opinion. This construct gives you an easy way out of the copyright trap by giving the content provider a means to express his wishes technically. At the same time, it is far from saying:
“Copyright is just waived if one publishes on the Web”, I say the
contrary.
So there is a solution to the copyright discussion concerning
transcoders. Feel free to forward this email to the appropriate
people or mailing-lists.
Best,
Rigo Wenning
W3C Legal Counsel
———————————————————
Now, I personally disagree with the idea that Cache-Control is equivalent to robots.txt (robots.txt you place there and forget about it; cache-control can disrupt applications which used to work perfectly up to one day earlier, not to mention a few other reasons), but this is not the main point in Wenning’s message. The main point is that transcoding against the will of the content owner “would mean that a transcoding proxy is producing a derivative work out of the content of the initial author and lacking rights to do so, would commit copyright infringement with all the consequences that come with it”.
Now, this is of course what I have been saying all the way. Content owners are kings over their content. Full stop. The opposite of what you have been claiming so far on the base of Wenning’s “transcoded” opinion.
Luca
> Luca – by your argument, your quoting of that CNN text
> is itself a breach of copyright and therefore illegal.
> I’m sure that, like me, you don’t consider that
> to be the case.
Not so. Because quoting of smaller portions of copyrighted text is covered by “fair use” in the US and in many other countries (UK and Italy included, I believe). According to fair use, I do not need to look for content owner approval in a case like this.
> Otherwise, all proxies, caches and transcoders would
> appear to be illegal.
Potentially, they are.
> To me, accessing web content transcoded into a form
> suitable for mobile consumption isn’t misuse, unless
>
> (a) there is a mobile version I should’ve had
> instead and was prevented from seeing or
> (b) I knew there was a mobile version but said
> I didn’t want it
I suspect (b) contradicts what you have been claiming so far.
Anyway, my point is that transcoding is misuse. Opt-in transcoding is more likely to make the infringement small enough that no real harm is done to the rights of content owners. Enforced transcoding is potentially much more serious. I object from the viewpoint of copyright, and I object from the viewpoint of not being able to control the user experience given the knowledge of the device in the hands of the user.
Luca
Luca – by your argument, your quoting of that CNN text is itself a breach of copyright and therefore illegal. I’m sure that, like me, you don’t consider that to be the case.
I’ll repeat my point: legal issues are best contended by lawyers, and the only legal advice we have available to us is that transcoding is acceptable. Otherwise, all proxies, caches and transcoders would appear to be illegal.
To me, accessing web content transcoded into a form suitable for mobile consumption isn’t misuse, unless
(a) there is a mobile version I should’ve had instead and was prevented from seeing or
(b) I knew there was a mobile version but said I didn’t want it
@Tom
> Until we get another legal opinion on the matter [..]
> W3Cs legal counsel is the only qualified opinion
> we’ve seen.
I beg to differ. If I publish stuff on the internet, I still retain the copyright. Look at CNN:
http://edition.cnn.com/interactive_legal.html
“CNN Interactive contains copyrighted material, trademarks and other proprietary information, including, but not limited to, text, software, photos, video, graphics, music and sound, and the entire contents of CNN Interactive are copyrighted as a collective work under the United States copyright laws. CNN owns a copyright in the selection, coordination, arrangement and enhancement of such content, as well as in the content original to it. Subscriber may not modify, publish, transmit, participate in the transfer or sale, create derivative works, or in any way exploit, any of the content, in whole or in part. Subscriber may download copyrighted material for Subscriber’s personal use only. Except as otherwise expressly permitted under copyright law, no copying, redistribution, retransmission, publication or commercial exploitation of downloaded material will be permitted without the express permission of CNN and the copyright owner. In the event of any permitted copying, redistribution or publication of copyrighted material, no changes in or deletion of author attribution, trademark legend or copyright notice shall be made. Subscriber acknowledges that it does not acquire any ownership rights by downloading copyrighted material.”
are you telling me that Turner (and hundreds of thousands of other companies) did not get qualified legal advice?
> So I completely refute your notion that “the web can
> only work as long as people who invest resources to
> create/acquire content can protect it legally from
> those who have not paid for it”
when I wrote “paid for it”, I did not mean to be literal. There’s lots of free content on the web (which the content owner may still not want anyone to misuse, such as in transcoding) and there is a lot of content which you pay for by being exposed to advertising (which transcoders happily strip off, in some cases even injecting their own ads with no reward for the content owner)
Luca
Until we get another legal opinion on the matter (gowan – your corporate masters must have a lawyer!) W3Cs legal counsel is the only qualified opinion we’ve seen.
Sorry it was second hand but I don’t keep him in a cage here ;) If it’s any consolation I’d expect the W3C legal counsel to be pretty well-informed on digital rights issues.
I don’t see what this has to do with protecting clients rights. If you put content online and don’t protect it using available means (e.g. no-transform), you can’t expect it to be protected. The fact that HTTP says “prevent transformation by doing something special” indicates to me that permission to transform is there by default.
Now from a users perspective I’d agree with you this is bad – hence our agreeing that transcoding should be opt-in not opt-out. But from a publishers perspective: if you don’t want your content transformed, there is a mechanism to prevent this, which all transcoder software must support.
Unless you’re prepared to take legal advice and publish it here, I’d suggest leaving legal issues aside. You and I aren’t qualified to discuss these any more than lawyers are qualified to discuss the ins and outs of building mobile web applications.
(As a side-argument: I’d say one of the reasons the web is here today is that early implementations optimised for ease-of-use and ease of participation, not protection of digital rights. So I completely refute your notion that “the web can only work as long as people who invest resources to create/acquire content can protect it legally from those who have not paid for it”)
@Tom
> As far as copyright objections go, the only qualified
> legal advice I’ve seen on this matter is that transcoding
> web content is not a breach of copyright law.
well, actually I (and you!) have seen tens of people repeat that transcoders should keep their hands off their content because they are in contravention of the usage T&C published on those very same web sites that get transcoded.
I have also heard a second-hand opinion to the contrary, but that was only one and it was reported by you: a W3C legal counsel of some kind stated that when you publish on the Internet you lose the copyright. Honestly, it seems to me like that guy is walking on pretty thin ice.
> If it were a breach of copyright law, *all* transcoding
> (whether formatting, repaginating, adding advertising,
> whatever) should be considered illegal.
which is the reality. Of course, there is crime and crime. In virtually all countries, if you kill someone, the state will come after you no matter what. If you infringe on someone’s copyright, though, it will be the responsibility of the copyright owner to bring you to court. Two very different levels of illegality. With transcoding, we talk about the second kind.
The fact that suing someone may not make much business sense in many situations (also in the ones you mention, for example) does not mean that copyright owners have forfeited their rights and cannot come after you at a later stage if they wish.
You can turn this the way you want, Tom, but the web can only work as long as people who invest resources to create/acquire content can protect it legally from those who have not paid for it.
Luca
Jose: I’m not supporting “transcoding by default” as your German/English analogy suggests, and I’m not supporting the actions of Novarra/Vodafone.
Luca: I understand your point, but disagree with it.
I am not suggesting that content owners do not own their content. As far as copyright objections go, the only qualified legal advice I’ve seen on this matter is that transcoding web content is not a breach of copyright law.
If it were a breach of copyright law, *all* transcoding (whether formatting, repaginating, adding advertising, whatever) should be considered illegal.
What’s the problem with the use case @legalien outlined? It seems completely legitimate to me.
> Eduardo: I agree, content owners should be able to
> restrict transformation of their content.
> They can do so with no-transform.
No way. Lack of the no-transform headers is NOT an indication that content owners have agreed to be transcoded.
> Allowing replacement of the UA header in extremely
> limited conditions (e.g. when explicitly requested
> by the user),
there you go again. No, no and no. Allowing UA header replacement in limited conditions gives Novarra an excuse to transcode every time. So, no. You are not going to bring back the discussion like this. The UA header MUST NOT be spoofed.
If a user installs or uses a tool that spoofs the UA, they do it with the same legal background with which they would install p2p and download content which may be infringing on someone’s copyright. In this situation, Novarra is like old-style Napster when it tried to win legal support for its “service”.
> the W3Cs legal counsel seems to have given it the OK though
> (see the thread “Legal Advice on Transcoding”, in WMLProgamming).
In that thread, you brought in the W3C’s viewpoint, and others argued (and I agree 100%) that the W3C’s legal counsel is bogus. How can you seriously argue that by publishing content on the Internet content owners forfeit the copyright?
Luca
Eduardo: I agree, content owners should be able to restrict transformation of their content. They can do so with no-transform.
I think the discussion here is over something slightly different: whether transcoding of web content should *ever* be allowed when a mobile alternative is available.
By disallowing any replacement of the User-Agent header, one asserts that this is not permitted. Allowing replacement of the UA header in extremely limited conditions (e.g. when explicitly requested by the user), the use case provided by @legalien above – which I think is compelling – is handled.
I’ll leave the legal aspects of this to lawyers; the W3Cs legal counsel seems to have given it the OK though (see the thread “Legal Advice on Transcoding”, in WMLProgamming).
@Tom
Tom, how many times will I need to repeat the same concept before you stop raising objections which imply that you have not understood my point?
> Luca – is a user accessing via mobile not a user with
> a need to view content in a mobile context?
yes, but, again, it’s up to the content owner to decide whether this user should get service or not.
Now, if you object that content owners do not own their content once they publish it, that’s an OK objection and we just need to wait until the day when this will be discussed in court.
If you object with yet another use case in which users may need transcoded content one way or another, then you are just wasting my and your time.
Luca
@Tom
It took me a while to get to the end of this thread. Phew!
Tom, and others in the “user has a right to anything” camp (I know, a little hyperbole): I do not buy it. First of all, if I go to a website and it does not have the info that I want, I will go to some other one that does. That is how the market economy works: you vote with your feet.
About the analogy with translation sites. One thing is for the user to use a translation site and laugh at how bad the translation is. Another very different would be for a German ISP to have a proxy in-between all English sites and return badly mangled German translations of the site as if it were the original content.
In such a case the user would be laughing at the site owners who may have lost any possibility of doing business with the user due to the bad impression.
This is what transcoders like Novarra are doing and the damage they are causing to the internet businesses out there. Novarra/Vodafone is not the one blamed for the bad service, it is the vendor at the end of the line.
Now, this is why the content owners want to keep the rights to their content. And that is why there are copyright laws. If Vodafone and Novarra want to be an agent in the middle, they need to go and negotiate a commercial agreement with the vendors that gives them the right to change their content (add advertising or whatever). That is what responsible companies do when they use someone else’s data.
Jose A
Another invisible developer
This discussion is getting long-winded, but anyway: usage of content published on the Internet is not entirely controllable, but it is not entirely free either. Content providers have legitimate reasons to restrict how their Web sites are accessed. Some examples:
a) Reformatting and translation. In its IPR conditions, the W3C does not allow free reformatting or translations of its documents. One has to ask W3C’s authorization, and, if granted, publish the new version with a disclaimer, and put a link to the original, authoritative version. This makes sense, as W3C’s documents are standards — one does not want to introduce confusion or scope for deviations. Other organizations will have similar restrictions for similar reasons.
b) Terminal. In the course of my Web-browsing, I stumbled in the past on a couple of sites that would refuse to serve end-users accessing them with anything else than a few specific browser versions. One of these sites was a banking site. Again, this is not unreasonable: because of security and liability issues, a firm is entitled to refuse to serve customers that do not fulfil requirements for its Internet services.
c) User characteristics. More generally, all banks with an Internet presence take provisions to restrict access to users depending on their country. As an example, one is not allowed to access Swiss Web banking sites from the USA, UK, Spain, Germany, Italy, etc. A similar situations occurs for music: (legal) Web download sites may have different catalogues depending on the country — because of distribution and copyright restrictions which they are bound to enforce.
d) Trademarks. Several years ago, during the developing of a content adaptation system, I was warned by customer account managers about taking lightly the conversion of images of corporate logos. Some (large) firms are very particular about protecting their trademarks. Converting a colour logo to a monochrome one was a no-no: those firms have official black-and-white logos that must be used in relation with their sites. Again, not an unreasonable limitation.
e) Derivative works. Taking content and transforming it in any way, or making it available via other channels, means treading into delicate copyright areas, especially since media is (1) increasingly digital (which blurs the frontiers between various distribution channels) and (2) usually protected by licensing restrictions. Grabbing Web podcasts and reformatting them for airing on digital radio, or automatically transcoding a sports desktop WWW site for access via mobile phones, may entail thorny and costly legal matters.
Web content is not as tightly controllable and rigidly protected as Luca’s statements might imply, but as soon as an organization’s digital information assets are in question, one cannot deal with them carelessly. And as the saying goes, the devil hides in the details.
E.Casais
I don’t think we want to dismiss people’s opinions because they used to work for transcoder (or WAP gateway!) companies. Transcoding is a pain: we all know that. I can’t fix it unilaterally – and nor can anyone else. In the meantime, I can only make another plea to deal with it constructively.
Anyway… I’m glad that the conversation has become slightly more interesting and considered.
I do think we await some case law. Even a large operator would listen to a cease-and-desist from a suitably powerful media brand.
(Oh, and if you’re reading this and a lawyer for suitably powerful media brand, please do ask yourself if you’re happy not to have the final say in your customers’ experience.)
Luca – is a user accessing via mobile not a user with a need to view content in a mobile context?
I’m not in a position to propose exceptions to copyright regulations, and the only legal opinion we’ve been able to get on the matter (from the W3C counsel) is that transcoding web content isn’t a breach of copyright.
If a transcoder messes up designed-for-mobile experiences without explicitly being asked to by the user (as Vodafone/Novarra did): bad.
If a transcoder gives a trancoded web version because the user asked it to (and we can see use cases for this): good.
If a transcoder gives a proper made-for-mobile version by default to mobile users: excellent.
> I know it’s not quite the same thing, but it seems
> there’s some equivalence here. Lots of users might
> want to consume content in different ways in
> different contexts.
There are two objections to what you are saying, but first observe that your objection does NOT contradict what I have been saying in previous posts. If you are a user with specific needs and go all the way to find tools and tricks to work around specific limitations of a website, this is something you can get away with (i.e. tolerated by content owners).
When a company uses this need to arrogate rights on someone else’s content, then it’s not acceptable anymore.
Specifically on your points, if a company decides that their English content should not be translated to French without authorization, they have a right to do it. The fact that Google allows users to translate it does not mean that the content owner has forfeited its rights on the content. Of course, if a French ISP decided to automatically translate all English websites accessed by French users, this would irritate many, which is very similar to what is happening with Novarra and mobile content.
But there is also another point: if we allow exceptions to copyright regulations in the name of a not-better-defined openness, what would prevent Novarra and other ill-intentioned companies from exploiting it in extreme ways (both on mobile and the big web), as we have seen with transcoding?
Luca
Is there not a presumption of openness and universal access when it comes to web content? If we don’t allow mobile users to access web content, what about users who are, say, vision-impaired and need screen readers – is it OK to transcode content for them? What about services that provide translation of content – if I publish my content in English, am I implicitly saying it shouldn’t be available in French? If I publish in XHTML-MP, am I saying my content shouldn’t be available in WML? I don’t think so.
I know it’s not quite the same thing, but it seems there’s some equivalence here. Lots of users might want to consume content in different ways in different contexts. As long as we’re not *forcing* them to consume it in an inappropriate context that goes against the wishes of the content provider (as Vodafone/Novarra did), I don’t think any harm is being done.
> But if a user wants a (probably comparitively crappy)
> transcoded web experience, I think that they should
> be allowed it.
What do you mean by “should”? Are you saying that they are supposed to be enabled technically, or are you saying that they are supposed to be legitimated?
My point is that users are not entitled to access content unless the content owner allows them to. If they find tricks to work around this limitation, they will probably be tolerated, but this does not mean that they have a right to do it.
IMO, this would strike a correct balance between the rights of content owner and the prerogatives of end-users. Postulating that users have some kind of natural right to transcoding of arbitrary internet content would open up to justification of abusive business models (and the kind of abuse we have seen with transcoders is a great example of this)
Luca
OK, speaking personally: that doesn’t chime with my idea of what the web is. If a content provider stops me from accessing their site because I’m using IE, fine: I can write a browser or use a Firefox plugin that pretends to be Mozilla. And it’s OK for me to do this (there’s a long history of such stuff going on). If I want to write a proxy which helps other people access this content, that’s fine too – as long as they’re choosing to use it, and aren’t forced to.
Similarly, if a content provider wants to provide a mobile-enhanced experience, that’s great (and given that I make my living helping people do this, I’m a big fan of it!). But if a user wants a (probably comparatively crappy) transcoded web experience, I think that they should be allowed it.
Whether they’re doing it via a transcoder owned by Vodafone or Opera doesn’t matter to me. As long as the user isn’t *forced* into having this experience, in the way that Vodafone/Novarra did for millions of users, I think it’s OK. So to me it all comes down to opt-in vs opt-out.
I think you and I agree on the point that all transcoding should be opt-in not opt-out, but you feel CTG is worded too weakly, and allows loopholes.
In which case I’d say, let’s get the wording stronger!
You ask a straight question, I’ll provide a straight answer.
> if I, as a user, want a transcoded full-web site,
> shouldn’t I be able to get it?
Short answer: No you shouldn’t.
Long Answer: No, you shouldn’t. Unless the content owner has either created or allowed this “feature”. Of course, you as a user may install Opera Mini, Skyfire or turn to an opt-in transcoder of some kind. In that case, it would be impractical/impossible/not desirable for content owners to prevent transcoding, but this would mean that you as a user are taking responsibility for what you are doing, not very differently (albeit probably much less seriously) from the case where someone uses p2p to download MP3 music. You are accessing content in non-legitimate ways.
Of course, nothing prevents users from requesting that extra features are added to a mobile site or that web and mobile URLs are kept separated. Another common pattern is to add an “access full web site” link prominently on the mobile home page, which would be a totally legitimate request from users to content owners.
Luca
Luca: I’m not disagreeing with you that content owners have rights. I’m saying that they’re not the only people who have rights.
But back to my question: if I, as a user, want a transcoded full-web site, shouldn’t I be able to get it?
I definitely agree with you that I shouldn’t get the transcoded web *by default* (as per Novarra/Vodafone) – this is definitely wrong, IMHO. But if there’s content there that I want to access from my phone… why should I be prevented from getting it?
@tom
> I’m not sure that I’d agree with you that content
> owners are the only people who have “rights” online
> (whatever such rights are).
Tom, here you are blatantly misrepresenting what I wrote. I wrote that online content owners have rights about how their content is presented. This is not a very hard concept to grasp, is it? it is the foundation of copyright regulations on the Internet.
If I create a poem and I decide that only users of Firefox 3.0.3.1 on Ubuntu Hardy can read it, it is my right to do so. You may question the logic behind the choice, but not that it’s my right to do it. Sure, someone can publish the poem on another site, but I still have the right to tell them to remove it, and take them to court if they don’t. It’s my content. Not yours. Not the users’ content. The fact that someone can work around my rights technologically does not mean that they have the legal right to do it.
Now, if content owners want to make their full website available through transcoding, they have the right to do it.
If transcoder vendors extort web content by tricking a site and making it return web content, they are abusing.
To me, this is a simple concept.
Luca
Luca
I only have one question: as a mobile user, if I specifically want a transcoded full-web site as opposed to a mobile site should I be able to get one?
I’d not seen a use case for this before, but @legalien came up with a quite succinct one: full web sites are quite content-heavy compared to your typical mobile sites. Most of the time this is fine, but there are times when you might want full-web content on the move.
With web servers multiserving content on the basis of user agent, is there any other way I, as a mobile user, could get full-web content other than by using a proxy which changes the user-agent header on my behalf?
If this is reasonable – and it seems reasonable to me – then I’m sure we can tighten up the language of CTG to avoid the sort of abuses which you and I are concerned about (Vodafone/Novarra-style automated transcoding that users have to opt out of) whilst leaving this use case handled.
I’m not sure that I’d agree with you that content owners are the only people who have “rights” online (whatever such rights are). What about end-users?
@Tom
Time to go back and provide a little “history of transcoding”. Transcoders were not invented in 2007; they had existed for at least five years before that. Personally, I was involved with transcoders in 2005. At the time, though, I did not have the strong viewpoints about transcoders that I have now. Not because I changed my mind, but simply because transcoders were being marketed as tools that operators might deploy on their portals to allow users to transcode their favorite web sites, and not as a gateway to everything HTTP. In other words, transcoders were opt-in in the fullest sense of the term. If a user wanted to transcode a website, they just needed to type its URL in a field on the operator deck and off they went.
Because of the inherently opt-in nature of transcoders at the time, there was no need for CTG or the Manifesto. Mobile sites were preserved and transcoded web content was a small enough phenomenon that web sites could happily ignore it (just a negligible fraction of their daily traffic).
Fast forward to 2007. Novarra and other transcoders go for more aggressive marketing. They figured that transcoders could be sold to gullible operators as the best thing since sliced bread, an amazing technology able to boost data traffic through the roof. In this view, mobile-optimised sites were just a bunch of cavemen, bound to disappear in the name of a magnificently reformatted full web.
This is the context in which the battle began. In this context, it is clear that the battle is not really about standards, but about a more general question: do content owners have a right to own their content or not? Figure out that particular point and the standards to support it will be very simple to derive (which explains why the Manifesto came about so rapidly, while CTG has not; the ambiguity of CTG’s purpose is killing it).
So, let’s face the question again: “do content owners have a right to own their content or not?”
I and my not-really-invisible army of developers think that *content ownership is sacrosanct!*
Legalalien and Novarra think that *web and mobile content are cannon fodder* and, once published, content owners have no right to decide how it is presented to users.
What about you? where do you draw a line?
Now, with this background, I will address the point that you and legalalien have raised in your last respective comments, but only after I have assumed that we all agree that content owners have a right to decide how their content is presented, because there is no compromise there. Anything else just means conflict until one of the parties (transcoders vs the rest of the mobile ecosystem) succumbs.
Question: assuming that content owners have a right to decide how their content is presented, how do we support a user’s desire to access a transcoded site?
The solution has been there all the time. Place the transcoder entry point on the operator deck and off you go. Very easy, isn’t it? “But transcoders would make much less money!” I hear you say. Well, but this is exactly my point. Why should the W3C create standards that serve the interest of those sitting at the table against the interest of the whole community?
But I will not stop here. Always with the assumption that content ownership is sacred, what if content owners themselves wanted to present a reformatted view of their web sites? In this case, transcoding would be legitimate, because it would be the choice of those who have the right to choose.
Of course, there would be a different technical problem at this point: how would a content owner know which transcoder to turn to? and what if they wanted to rely on the transcoder that different operators may have installed?
Well, this is what OMA or the W3C might want to create specifications about. For example, CTG might specify that operators add the transcoder info in HTTP headers:
Via: Novarra-Vision/7.3
X-Transcoder-Offered: true
X-Transcoder-EntryPoint: http://novarra.dmz.telco.com/tr/url=
at this point, content owners could create their http://www.company.com sites which recognize mobile devices through the original HTTP headers they use today, while being given a chance to offer a transcoded version with something as simple as:
<c:if test="${transcoderOffered}">
<a href="${transcoderEntryPoint}http://www.ba.com">Access full Site</a>
</c:if>
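To make the idea concrete, here is a sketch of how ${transcoderOffered} and ${transcoderEntryPoint} might be populated on the server side, assuming the hypothetical X-Transcoder-* headers above were really sent by the operator (nothing here is an existing API, it is just the proposal spelled out):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Illustrative filter exposing the (hypothetical) operator-advertised transcoder to the JSP layer.
public class TranscoderOfferFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String entryPoint = request.getHeader("X-Transcoder-EntryPoint");
        boolean offered = "true".equalsIgnoreCase(request.getHeader("X-Transcoder-Offered"));
        // Make the values visible to the ${...} expressions in the JSP above.
        request.setAttribute("transcoderOffered", offered && entryPoint != null);
        request.setAttribute("transcoderEntryPoint", entryPoint);
        chain.doFilter(req, res);
    }
}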
Now, this would be a more reasonable activity for W3C to carry out (and not the mess of UA-spoofing exceptions which allow abusive transcoders to abuse). As an added bonus, it would also be compatible with the Manifesto, which never hurts.
Luca
James Pearce’s comment about beating each other up is pretty descriptive :) It’s all about adapting to the current environment. Developers are good at this. Developers are trying to satisfy the customers’ needs, and when the developers are not able to because someone changes the way the Internet usually works, of course there will be some beating. Sooner or later the developers will adapt to the current world, but my fear is that this will affect the consumers of mobile services negatively. In my experience the Manifesto is one of the more important contributions. The “army” of developers behind it is to me very visible, at least in the Scandinavian region (I’m not surprised if it’s less visible in the US..) As an example I can mention the TeliaSonera case in Sweden (http://mtny.mobi/16). The pressure from the mobile business, both from developers and commercial parties, resulted in a change of direction for TeliaSonera. More changes will come, all in line with the Manifesto. So in this never-ending discussion, I believe it is important to know that this issue is regional rather than global, and that there is not only an army of developers but also an army of commercial forces behind it.
Actually, I find legalien’s point quite compelling: mobile sites are typically less content-rich and more goal-driven than web sites – they offer what makes sense in a mobile context. This is certainly the kind of advice that we offer our clients: don’t just take web content and stick it on a smaller screen, but make sure you’re taking into account the context in which the site is likely to be used.
In which case, if you do want access to the full content of a web site on the move – for instance, if you want to find corporate information about BA.com instead of a list of departure times – then you *need* to be able to specifically request the full web site, instead of the mobile version.
In this case, it is in the users interest to be able to ask for a transcoded experience; and if the only way of getting this is by UA spoofing, this would seem to be a realistic use case for where this is a good thing. I’d not considered this before, but I’m guessing this might be one reason why the exception exists in the CTG document.
Given that we have a use case for where this is necessary, it seems worth tightening up any loophole that might exist in the language. How could this be done, to allow for the use case where users explicitly ask for a transcoded web site but not leave transcoder deployments able to transcode by default? My reading of CTG is that it doesn’t allow for this (saying as it does that users must “specifically request” a transcoded site) – so where is the room for misinterpretation here?
I think the whole logic of UA spoofing is… well… illogical, unless you’re intentionally trying to trick the site into giving you content which it fundamentally thinks it doesn’t want to give you.
If a site reacts differently to a user agent, then that site is obviously doing some form of device or UA analysis: in which case it’s probably fair to assume in this day and age that they’re either using something like WURFL, or have a hardcoded list of acceptable browsers like in the days of non-MSIE compatibility.
Given that, it’s logical to say that UA Spoofing is unnecessary for sites which do not detect the UA. The transcoder should just transcode the HTML coming back.
If the site refuses access, then it has a reason:
1) Browser unrecognised (site too dumb)
OR
2) Device recognised as mobile (site too smart)
I would have thought that (2) was rare: if a programmer knows he’s seeing a mobile device, he’d redirect or serve appropriately.
Thus the only use case where spoofing is of any use is (1): in which case, you’re gatecrashing your way into a dumb site. Use case (2) is deliberately circumventing a programmer’s efforts to serve users.
Thus the only use-case of UA spoofing in which it’s actually of use is the case of the site expecting a known terrestrial browser and failing to recognise the one visiting: which I would contest is rare enough (and non-existent in the popular sites that most people would visit).
So the logical conclusion is that UA Spoofing is unnecessary, and is the result of lazy programming.
Logical table:
Mobile UA -> dumb site -> HTML -> transcode it -> user
Mobile UA -> ua aware site -> accept -> site serves content appropriate for device
Mobile UA -> ua aware site -> reject -> redirect or nasty error message
Of course, the real issue here is trying to deceive websites: in which case I’d contend that it’s arrogant to assume that you have a right to do so. It’s not arrogant for a site owner to refuse access to certain users: it’s their website. They have every right, even if the decision is dumb. They live or die by that, but it’s their decision to make.
And of course, it would be easy to patch WURFL for Novarra (and in fact, Novarra strings are already in there). But it screws up unique user counts, site statistics, and any number of other things a site owner gets their intelligence from.
The issue here is that there doesn’t seem to be a technical reason why the UA must be spoofed unless you’re trying to trick a mobile site into giving you web-based content.
Chris