Indian mobile carrier Reliance Communications is considering a merger with South Africa’s MTN or roping in a strategic foreign investor to raise funds, the Economic Times reported on Thursday.

The No. 2 Indian mobile operator is ready to sell a stake in the company to raise funds to finance its foray into 3G mobile and wireless broadband access, the newspaper said, citing a person familiar with the plans.

At an MTN board meeting on Tuesday, the South African telecoms major decided it would examine Reliance Communications' merger proposal, the paper quoted the person as saying.

Officials at Reliance Communications and MTN could not immediately be reached by Reuters for comment.

Abu Dhabi’s Etisalat said on Wednesday it was looking to buy a stake in an Indian mobile operator, but did not disclose any names. A newspaper had reported it was in talks with cash-hungry Reliance Communications for a $3.8 billion deal.

Reliance Communications and MTN had planned a tie-up in 2008 but the deal was thwarted by issues within the Reliance family.

Source: Business Standard

The Central Electricity Regulatory Commission (CERC) has approved the state-run PowerGrid Corporation’s plan to set up nine High Capacity Power Transmission Corridors (HCPTC) costing Rs 58,061 crore. These transmission systems will evacuate power from various projects planned by independent power producers (IPPs).

The transmission corridors will evacuate power from IPPs in Orissa (to cost Rs 8,752 crore), Jharkhand (Rs 5,709 crore), Sikkim (Rs 1,304 crore), Madhya Pradesh and Chhattisgarh (Rs 1,243 crore), Chhattisgarh (Rs 28,824 crore), the Krishnapatnam area in Andhra Pradesh (Rs 2,065 crore), the Srikakulam area in Andhra Pradesh (Rs 2,986 crore), Tamil Nadu (Rs 2,357 crore) and the southern region (Rs 4,821 crore).

CERC, in its order, observed that development of the corridors was considered necessary for evacuation of power from the projects envisaged during the 11th Plan.

The power shortage in the country in 2009-10, according to a Central Electricity Authority report, is 10.1 per cent in energy terms and 12.7 per cent in peak demand terms.

“The Commission, based on the report furnished by PowerGrid Corporation, which is a central transmission utility (CTU) on physical progress of Generating Units of IPPs, is satisfied that these High Capacity Transmission corridors are required for evacuation of the power from these IPPs and any delay in implementation of these transmission schemes may result in bottling up of the power,” the power regulator said.

The power projects are located either in the coal belt, or in coastal areas (which would use imported coal) or in the hydro power potential areas of the North-East. “Power from these projects has to be brought to the load centres in the northern and western regions, which requires development of transmission systems,” the power regulator added.

CERC has directed PowerGrid Corporation to ensure that the proposed transmission projects for which regulatory approval has been granted are executed within the time frames matching the commissioning schedules of the IPPs so that the beneficiaries are not burdened with higher interest during construction.

Source: Economic Times

MUMBAI | NEW DELHI: Sasol, the largest producer of motor fuel made from coal, plans to spend $10 billion in India in partnership with the Tata Group on a block awarded last year, following similar investments in Indonesia and China.
The South African company plans to produce 80,000 barrels a day of motor fuel by 2018 from a coal block in Orissa, Mark Schnell, president of the company’s Indian unit, said in an interview in Mumbai. Sasol and Tata Group own equal stakes in the venture, he said. “It’s going to be a mega project of the magnitude of $10 billion by the joint venture,” Mr Schnell said. “At this stage, the focus is on understanding the resource and making sure of the economics of building a plant here.”

Rising incomes in India are driving vehicle sales and boosting fuel demand. The country’s energy use may more than double by 2030 from 2007 levels to the equivalent of 833 million metric tonnes of oil, according to the Paris-based International Energy Agency. “That is a tremendous amount of money and a project like that will become viable at very high crude prices,” said Victor Shum, a Singapore-based senior principal at US energy consultants Purvin & Gertz. “If the alternative of producing fuels from crude oil is cheaper, then a refinery would make more sense.” Sasol and the Tata Group were awarded the coal-to-liquids project in Orissa, Tata said in March last year. “We feel that this is a right step toward securing energy security for the country,” the Tata Group said.

Jindal Steel & Power said in March last year it was allotted a coal-to-liquids block in Orissa. The project will produce 80,000 barrels of fuel a day from coal and is estimated to cost Rs 42,000 crore, including mining and a power plant. India’s production of gasoline rose 32% to the equivalent of about 422,800 barrels a day and diesel output rose 12% to about 1.4 million barrels a day in the year ended March.
Sasol is considering increasing the capacity of a similar plant in China with Shenhua Group by 13% to 90,000 barrels a day, chief executive officer Pat Davies said. The cost of the plant with Shenhua is less than $10 billion, he said.

The South African company signed a memorandum of understanding with Indonesia for the possible development of an 80,000 barrel-a-day coal-to-fuel plant in the Asian country, Sasol said. In January 2009, Bukin Daulay, head of coal and mineral research at Indonesia’s energy ministry, said Sasol could spend $10 billion on the plant. Sasol, which produces over 40% of South Africa’s motor fuel, uses technology first employed by Nazi scientists and refined by apartheid-era engineers. The company plans to build new coal-to-fuel plants in the US, China and India.

Source: Economic Times

It’s been just over a week since Microsoft started offering commercial subscriptions of Windows Azure. As a Cloud enthusiast, I quickly signed up for the Introductory Special subscription and downloaded the required tools to deploy my first app to the Cloud. I have had access to Windows Azure since PDC08 and have deployed quite a few apps to test the functionality and features of the platform. As an independent Cloud Computing Strategist, I also explore Amazon Web Services and Google App Engine. Having worked on a few mature Cloud Computing platforms and tools, I had certain expectations from Windows Azure, particularly after it went past the beta phase. Honestly, apart from pricing, I could not notice any significant difference in the development and deployment experience on Windows Azure.
Here is a list of the top 5 things that I feel Microsoft should fix immediately.

5) Metering and Billing Model – Microsoft starts charging the moment you deploy your app, even if it is not running and not consuming any resources. Refer to the FAQ on pricing for more details. I personally find this a huge entry barrier. Ideally, I wouldn’t want to be charged when my app is in ‘Suspended’ mode rather than ‘Run’ mode. Technically speaking, what resources would my app consume when it is not running? Only a few megabytes of storage to keep the package and the configuration files. Amazon doesn’t charge me VM hours for inactive AMIs; it only charges me the nominal cost of storing the AMI on S3. Charging VM hours for an idle application is just not convincing. I find it counterproductive and a barrier for developers to embrace this platform.
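To put that complaint in numbers, here is a back-of-the-envelope sketch in Python. The $0.12/hour small-instance compute rate is my assumption based on the published introductory Azure pricing; check your own plan's rate before trusting the output.

```python
# Back-of-the-envelope cost of a deployed-but-suspended Azure app.
# ASSUMPTION: $0.12/hour for a small compute instance (the published
# introductory rate; your plan's rate may differ).
RATE_PER_HOUR = 0.12
HOURS_PER_MONTH = 24 * 30

instances = 2  # e.g. one Web Role + one Worker Role instance
idle_monthly_cost = instances * HOURS_PER_MONTH * RATE_PER_HOUR
print(f"Cost per month while suspended: ${idle_monthly_cost:.2f}")  # $172.80
```

That is a noticeable bill for an app that consumes nothing but a few megabytes of storage.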

4) Simpler Pricing and Subscription Plans – Microsoft is at it again! After confusing consumers with half-a-dozen flavors of Windows Vista and, recently, Windows 7, they are doing the same to their Online Services subscribers. There are at least four subscription plans to choose from – Introductory Special, Development Accelerator Core, Development Accelerator Extended and ‘Pay As You Go’ Consumption. This bouquet of offers is confusing and far from a straightforward model. It reflects the classic Microsoft way of packaging and licensing products as Express, Standard, Professional and Enterprise, which doesn’t gel with the online services model. I personally prefer the Amazon way of pricing and signing up. Simple and straightforward!

3) Agile Deployment and Faster Change Management – On the commercial version of Windows Azure, it takes more than 7 minutes for an application to become available and accessible. And I am not talking about an Azure application built to help NASA launch the Mars Rover; this is a plain and simple Hello World ASP.NET app without a single line of server-side code. Add a Worker Role, a few lines of dynamic code and storage-access code, and it takes a good 10 minutes for your application to take off. Google App Engine and Amazon EC2, by comparison, are almost instant: a Linux EC2 AMI boots in about 2 minutes and I am at the root prompt in no time. Windows Azure is a PaaS offering and I cannot afford to wait several minutes every time I make a trivial change to an HTML file. I could understand it if Amazon forced me to go through this, since I may have to bundle the AMI with every change I make – and with EBS-backed AMIs now bootable, I need not even worry about bundling. Windows Azure might also be spinning up new VMs for every change I make to the app or its configuration, but I don’t care; I cannot wait 10 minutes for simple changes to be reflected. This needs to be fixed!
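If you want to measure this for yourself, a small stopwatch script is enough. A minimal sketch follows; the hosted-service URL is a hypothetical placeholder, and the app is assumed to answer HTTP 200 only once the deployment is Ready.

```python
# Stopwatch: how long until a fresh deployment answers HTTP?
# ASSUMPTION: the URL below is a hypothetical placeholder for your
# hosted service, and it returns 200 only once the app is Ready.
import time
import urllib.error
import urllib.request

URL = "http://myfirstapp.cloudapp.net/"  # placeholder hosted-service URL

start = time.time()
while True:
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            if resp.getcode() == 200:
                break
    except (urllib.error.URLError, OSError):
        pass  # not reachable yet; keep polling
    time.sleep(15)

print(f"App became reachable after {time.time() - start:.0f} seconds")
```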

2) Configuration Editor – After you deploy a Cloud application, you may need to change its configuration settings, such as the number of instances of a Web Role / Worker Role or a custom configuration setting. Having a plain vanilla textbox for editing sensitive configuration settings is just not acceptable! Microsoft has every reason to put up a better frontend for managing configuration, potentially built using Silverlight – at least that would drive more downloads of the Silverlight plug-in. One look at the configuration-settings textbox makes me feel I am working on an early CTP release. The EC2 Console from Amazon (and a bunch of 3rd-party tools like ElasticFox) and ElasticHosts’ configuration editor are much better. Though Microsoft may want to encourage partners like RightScale to eventually build such tools, as a paying subscriber I deserve a better tool here!
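Until a proper editor shows up, the edit is easy enough to script yourself. Below is a minimal sketch that bumps the instance count in a .cscfg file; the schema namespace is the standard ServiceConfiguration one, while the file name and role name are illustrative placeholders from a typical project.

```python
# Minimal .cscfg instance-count editor: a stand-in for the missing UI.
# ASSUMPTIONS: the standard ServiceConfiguration schema namespace below;
# the file name and role name are illustrative placeholders.
import xml.etree.ElementTree as ET

NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
ET.register_namespace("", NS)  # keep the output free of ns0: prefixes

def set_instance_count(cscfg_path, role_name, count):
    tree = ET.parse(cscfg_path)
    for role in tree.getroot().findall(f"{{{NS}}}Role"):
        if role.get("name") == role_name:
            role.find(f"{{{NS}}}Instances").set("count", str(count))
    tree.write(cscfg_path, xml_declaration=True, encoding="utf-8")

set_instance_count("ServiceConfiguration.cscfg", "WebRole1", 4)
```

A real tool would of course validate the value against the schema before saving, which is precisely what a plain textbox never does.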


1) Windows Azure Integration with Visual Studio – This certainly deserves to be the numero uno. Microsoft’s biggest USP is its integrated platform-and-tools approach. Whether it is core .NET development, Microsoft Office development, BizTalk adapter development or SharePoint development, it has always been a ‘Better Together’ story for developers, with Visual Studio at the center. When it comes to Azure, I had a lot of expectations of the Visual Studio integration, because Azure is logically an extended development platform on the Cloud and developers should be able to deploy their new breed of applications seamlessly. To my surprise, Windows Azure Tools for Visual Studio 2008 1.1 has little to no integration with the real Azure platform. When I right-click on the Cloud Service project and select Publish, I expected Visual Studio to prompt me for my Windows Live ID, enumerate the Hosted Service projects, seamlessly deploy the app onto Azure and then take me to the Development Portal to choose between staging and production. The actual approach – opening an Explorer window with the folder containing the .cspkg and .cscfg files and launching the browser with the Azure Development Portal – feels utterly broken!

Compare this with the Eclipse and Google plugin integration: two independent entities from different companies offer a better developer experience than Azure Tools for VS. When I click the GAE button on the Eclipse toolbar, it simply deploys after prompting me for the Google ID and the Application ID. I am not sure whether this will be fixed in Visual Studio 2010, but as of now, on the currently available Visual Studio 2008 SP1, it is not in place. I want Visual Studio to fully support me end to end across Cloud application design, development, testing and deployment. I also expect an integrated Azure Storage tool within Visual Studio: I should be able to browse and manipulate the Tables, Blob metadata and the Queues. Today I have to rely on 3rd-party tools for this, which doesn’t help developer productivity. Overall, Visual Studio and Azure integration has a long way to go!
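In the meantime, the deployment workflow can at least be scripted around the portal using the Service Management API. A sketch of the simplest call, listing your hosted services, is below; the subscription ID and certificate path are placeholders, and I am assuming the documented REST endpoint with the 2009-10-01 API version and a management certificate already uploaded through the portal.

```python
# Sketch: list hosted services via the Windows Azure Service Management
# REST API instead of clicking through the Development Portal.
# ASSUMPTIONS: API version "2009-10-01"; a management certificate already
# uploaded to the portal; the ID and file path below are placeholders.
import http.client
import ssl

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

ctx = ssl.create_default_context()
ctx.load_cert_chain("management-cert.pem")  # your management certificate

conn = http.client.HTTPSConnection("management.core.windows.net", context=ctx)
conn.request(
    "GET",
    f"/{SUBSCRIPTION_ID}/services/hostedservices",
    headers={"x-ms-version": "2009-10-01"},  # version header is mandatory
)
resp = conn.getresponse()
print(resp.status, resp.read().decode())  # XML listing of hosted services
```

It works, but having to hand-roll HTTPS calls is exactly the gap that Visual Studio integration should be closing.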

Source: MSV


According to Wikipedia, an unmanned aircraft system or unmanned aerial vehicle (UAV) is an aircraft that flies without a human crew on board. … a UAV is defined as a reusable, uncrewed vehicle capable of controlled, sustained, level flight and powered by a jet or reciprocating engine.
So, what is the difference between an application running in your datacenter and an application running on the Cloud? Well, it is as different as an aircraft flown by a pilot and an unmanned aircraft system. Interestingly, both the Cloud application and the UAS share the same attributes! When you are running an enterprise application hosted at the datacenter in your backyard, you have a lot of liberty in controlling it. You can monitor it closely to track the performance and it is fairly easy to fix the bottlenecks. Same is the case with an aircraft flown by a professional pilot. He can determine the right altitude and the direction based on the wind speed and the weather conditions. The pilot will take every step to make sure that the flight is as safe as possible.
Running the same enterprise application on the Cloud is no different from flying an unmanned aircraft. You never know which server, datacenter or continent hosts your application. For the UAS, imagine the challenges involved in accurately gauging external factors like wind speed, direction and altitude. The external factors that govern the smooth operation of a Cloud application are traffic, resource usage and security, and you have to tweak the application to meet these demands on the fly! Whether it is a Cloud application or an unmanned aircraft system, both need a sophisticated control center for remote operations, and the constant communication between the control center and the system is critical. Though it is obvious that most Cloud offerings expose ‘Compute’ and ‘Storage’ services, there is another crucial service: the ‘Management’ service. This service connects your Cloud app with the control center at your end. By consuming this service, you will be able to tweak your application to meet the external conditions.

Key Services Exposed by Cloud
Amazon Web Services offers a mature API to take control of the infrastructure that runs your application, and the Microsoft Windows Azure Platform has a Service Management API that lets developers programmatically control a Cloud application’s parameters.
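To make the ‘control center’ idea concrete, here is a minimal sketch of that loop against the AWS EC2 API using boto3; the region is illustrative and credentials are assumed to be configured in the environment. The same API that reports telemetry can issue the corrective commands.

```python
# Sketch of the control-center loop: read the fleet's state (telemetry),
# then react through the same management API (control).
# ASSUMPTIONS: boto3 with AWS credentials configured; region illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Telemetry: which instances are running right now?
running = [
    inst["InstanceId"]
    for reservation in ec2.describe_instances()["Reservations"]
    for inst in reservation["Instances"]
    if inst["State"]["Name"] == "running"
]
print(f"{len(running)} instance(s) running: {running}")

# Control: the same client can react to conditions, for example:
# ec2.stop_instances(InstanceIds=running)  # ground the fleet
```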

Source: Janakiram (Microsoft)


SalesForce.com and VMware jointly announced a new PaaS offering called VMforce. This is a huge announcement that has a very strong impact on the Cloud ecosystem. Let’s see what it means to us.

The enterprise application development platform is dominated by two obvious platforms – .NET and Java. I qualify my statement with the ‘enterprise’ keyword because other platforms like LAMP, Ruby on Rails and Python are great for consumer web apps but are not the first choice for building Line of Business (LoB) applications. So, when an enterprise wants to look seriously at the Cloud, it wants a platform exposing either .NET or Java as a service. The application platform on the Cloud is technically called Platform as a Service (PaaS). Till date, PaaS has typically been associated with Microsoft’s Windows Azure Platform, Google App Engine and Force.com. Windows Azure is the preferred platform for .NET developers, and in the last one year Microsoft’s continuous investments have made it comprehensive and mature enough for businesses to go live on the Cloud.

Java developers, by contrast, had to settle for the limited capabilities offered by Google App Engine. Right from the day the Java runtime on App Engine was announced, Google did very little to entice the Java community. Moving an enterprise Java app to GAE is not really straightforward: GAE doesn’t support all the capabilities of Java EE, and even for web applications there are quite a few constraints that force developers to re-factor their applications to run on GAE. Moving an app back and forth between the local datacenter and GAE is not easy. So there has been no PaaS offering for Java developers comparable to Azure. In one of my articles, I covered how Sun lost the opportunity to deliver a Java PaaS to the community. This gap is now being filled by VMforce, which aims to be the de facto Cloud platform for Java developers: VMforce would be for Java developers what Azure is for .NET developers.

But why did VMware join hands with SalesForce.com? VMware has a proven stack for the Cloud in the form of vSphere and vCloud. It never wanted to compete directly with IaaS providers like AWS or GoGrid; instead, VMware wants to capture the Private Cloud market by aggressively competing with Microsoft and others. SalesForce.com, on the other hand, has been in the Cloud services business for a while and has become synonymous with SaaS. It has also started to expose the middle tier that powers its CRM through the Force.com PaaS offering, and it has the right level of infrastructure, ready to scale. But virtualization combined with the right infrastructure alone doesn’t make an exciting platform for developers.

VMware made two strategic investments last year: it acquired a Java framework and tools company called SpringSource and a Message-Oriented Middleware (MOM) company called RabbitMQ. These investments made VMware ready for a complete platform offering. Just as VMware brings an abstraction layer between the real hardware and the OS, SpringSource adds a layer between the Java runtime and enterprise applications, so Java developers targeting SpringSource can easily move apps across multiple environments. Message queuing is very important for enterprise application scalability, and the combination of SpringSource and RabbitMQ offers a powerful and scalable enterprise Java environment. Now the equation becomes pretty interesting: VMware offers the SpringSource framework for on-premise servers and the Private Cloud, which can be further extended to the Public Cloud hosted on SalesForce.com. Add the LoB components, the multi-tenant capability, the enterprise database and the UI widgets that are already part of Force.com, and they have a pretty solid PaaS in the making.
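To see why the messaging piece matters for scale, here is a minimal producer sketch against a RabbitMQ broker, written with the pika Python client purely for brevity (the broker host and queue name are illustrative): the front end enqueues work and returns immediately, while any number of worker processes drain the queue at their own pace.

```python
# Minimal RabbitMQ producer: decouple request handling from heavy work.
# ASSUMPTIONS: a RabbitMQ broker on localhost and the pika client
# (pip install pika); the queue name is illustrative.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # survives broker restart

# The front end enqueues and returns immediately...
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body="process order #42",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
# ...while any number of workers consume from "orders" in parallel,
# which is the scale-out pattern the SpringSource/RabbitMQ combination
# brings to enterprise Java.
```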

Who should be worried about this announcement? Google! They now have a serious contender in VMforce. As a developer, I would prefer setting up a SpringSource environment on my local machine and using Eclipse to seamlessly deploy across my local server, a Private Cloud or the VMforce Public Cloud. I need not heavily re-factor my applications for the Cloud anymore, and relying on a proven Java framework like Spring gives developers the confidence to standardize their apps across multiple deployment environments. But should Microsoft be worried about this announcement? Yes, though not as much as Google! Microsoft did the right thing by bringing .NET to the Cloud early and helping developers make a smooth transition. For any Microsoft shop, the first choice is Azure, and Microsoft will continue to lead in that space.

Source: Janakiram (Microsoft)


The landscape of the application development platform is divided into two camps – .NET and Java. When Microsoft announced .NET a decade back, I expected that they would officially come out with an Application Server to compete with the WebLogics and WebSpheres of the world. But Microsoft’s pitch has always been that Windows Server has it all! Every instance of Windows Server can be enabled for an ‘Application Role’, which includes Web Server (IIS), Development Runtime (.NET), Enterprise Services (COM+), Message Queuing (MSMQ) and Web Services (WCF).

Borland (Borland ES), IBM (WebSphere), Sun/Oracle/BEA (GlassFish, OC4J & WebLogic), Red Hat (JBoss), SAP (NetWeaver) and others like Apache (Geronimo) created a niche market for J2EE Application Servers offering application services within the Java context. Any enterprise customer deciding to deploy an Application Server will first zero in on the development platform. If it is Microsoft, the choice is simple: Windows Server. But if the enterprise application is Java based, there are quite a few Application Servers to choose from. So the Application Server market is primarily divided between .NET and J2EE.

Fast forward to 2010, and I feel this is pretty much being repeated in the Cloud within the PaaS landscape. After announcing a partnership with SalesForce.com, VMware has made another huge announcement at the Google I/O event: Google App Engine now supports the Spring framework, powered by VMware! I have always complained about GAE’s limited capabilities and the massive re-factoring needed to port an application to it. With Google and VMware springing (pun intended) this surprise, both concerns are addressed. Now any Java developer can download and set up the Spring environment on his or her machine and then target Google App Engine for deployment.

When I first read about VMware’s Open PaaS vision, I have to admit I didn’t take it too seriously. But now that they are on a signing spree with partners to support Spring on their respective Cloud environments, it looks very promising. Every Java developer can now choose to deploy on either SalesForce or Google App Engine, and I have a feeling VMware is talking to IBM, Oracle and others who have the potential to make it big on the Cloud. What is more exciting is that enterprises can set up a Private Cloud on VMware’s vSphere running the same PaaS, and then deploy and switch across multiple Cloud vendors. If VMware succeeds in convincing every major Java PaaS vendor to support Spring, it can safely claim to have created an Open PaaS platform. Spring insulates Java Cloud applications from the underlying PaaS and brings in portability, delivering the much-talked-about Cloud portability, at least in Java PaaS environments.

Five years from now, the PaaS world may again be divided between Microsoft (Windows Azure) and the rest of the world (Java PaaS, potentially powered by Spring).

Source: Janakiram (Microsoft)


The development of the Delhi-Mumbai Industrial Corridor (DMIC), with an investment of over $110 billion over the next ten years, could bolster India’s chances of becoming the global workshop for green technologies.

Union Minister for Commerce and Industry Anand Sharma said: “India has the potential to become the workshop of new technology. The late eighties and early nineties came with the big boom in communication and information technology. India quickly moved in at the high end. The development of the industrial corridor will bring about a revolution in the use of green technologies.”

The DMIC envisages setting up investment regions in Dadri-Noida-Ghaziabad in Uttar Pradesh, Manesar-Bawal in Haryana, Khushkhera-Bhiwadi-Neemrana in Rajasthan, Bharuch-Dahej in Gujarat, Igatpuri-Nashik-Sinnar in Maharashtra and Pithampur-Dhar-Mhow in Madhya Pradesh. An industrial area is also planned around the Dighi port in Maharashtra.

The land acquisition process has been initiated and the finances are being worked out. Of the $100 billion required for development, India and Japan have committed $100 million each.

However, the government does not want DMIC to become an isolated activity “but move within multiple new policy frameworks of the government” and come up with products and innovations in green technology.

The government is already working on a national manufacturing policy to help increase GDP contribution from the sector from the existing 15 per cent to 25 per cent. It has already decided to set up the first National Manufacturing and Investment Zone (NMIZ) in Rajasthan along the DMIC to boost the manufacturing sector.

Consistent with these policies, the Cabinet has also cleared the setting up of an enterprise, Invest India, in which the government will have 49 per cent equity and FICCI will have 51 per cent. The organisation is already operational and will undertake missions to sensitise investors and have focal points in states to co-ordinate with them, says Sharma.

To hasten sanctions and clearances at the state level, Sharma said, a conference of state industry ministers was organised to discuss ways to improve cooperation. The conference looked at bringing uniformity to and simplifying the rules.

Amitabh Kant, chief executive and managing director of DMICDC, said cities and industrial regions planned along the corridor will be built using smart technologies.

However, there has been a delay in implementing the world’s largest infrastructure development initiative. The first phase, covering 12 nodes, was initially scheduled for commissioning by 2013, and the remaining 12 investment regions and industrial areas were to be developed by 2018. Due to procedural delays, it has now been decided that seven nodes will be completed by 2018.

Seventy-five per cent of the project, Kant says, will be developed on public-private partnership (PPP). The finances will be sourced through Overseas Development Assistance (ODA) loans from Japan and resources raised through institutional bonds. Officials claim a major portion of the project would be ready for commissioning in nine years.

“We are an aspirational country. Today we see a commitment both at the industry and government level. This country has a vision and self confidence. We are investing in institutions and developing human resources. I am optimistic,” said Sharma.

Source: Business Standard