Just sharing some of my inconsequential lunch conversations with you...

Friday, July 31, 2009

These days, people outsource just about anything!

Here’s a cool example that just popped up in a Google ad on devcatharsis: ScienceOps. What do they do?

ScienceOps creates, validates, and optimizes algorithms.

Our team of Ph.D. level scientists provides algorithm solutions for your business needs.
Enable your business critical code to produce required results faster, allowing better decisions and opening up new opportunities for your business.

  • Algorithm Development as Firm Fixed Price Contracts
  • 3rd Party Algorithm Validation
  • Algorithm Optimization
  • Statistical Data Analysis
  • Data Mining
  • Complex Algorithms
  • Scientific Algorithms
  • Custom Algorithms
  • Rapid Algorithm development
  • Algorithm Design

Customized algorithms can read, search, and analyze large amounts of information, make predictions, make split-second decisions in real time, and adapt instantly to changes in new input information.

Hey, pretty much a dream job :)

RFID Operations Guide Live

Here it is: the first “BizTalk Server 2009 RFID Operations Guide”.

I haven’t read it yet, but according to Ewan Fairweather, the key sections of the guide are:

· Planning the Environment for BizTalk Server RFID: Explains the planning required for various components such as RFID devices, server deployment, performance, HA, etc. to ensure that your BizTalk Server RFID infrastructure and applications become operationally ready.

· Operations Checklist: Provides a set of daily, weekly and monthly tasks that will empower the IT Pro to assess and evaluate the operational readiness state of a BizTalk Server RFID deployment.

· Managing Deployment: Covers best practices, key concepts, and procedures specific to BizTalk Server RFID and its dependencies for maintaining, managing and monitoring the various components.

· BizTalk RFID Useful links: Compiles all the BizTalk Server RFID relevant links that will be useful post deployment to production.

ASP.NET MVC V2 Preview 1

Damn, these goodies just don’t stop dropping; when will we ever get stable environments? Eh, eh, eh, just kidding: stopping is dying, and lately we’re anything but walking a desert of cool technological stuff. Thanks Scott.

Photo tips

Here is a set of photography learning sites I just got from my friend Pita:









Here’s a gem found on one of these:

Wednesday, July 29, 2009

Microsoft / Yahoo agreement

Yahoo! and Microsoft announced an agreement that will improve the Web search experience for users and advertisers, and deliver sustained innovation to the industry. In simple terms, Microsoft will now power Yahoo! search while Yahoo! will become the exclusive worldwide relationship sales force for both companies’ premium search advertisers.

Finally something well done. It isn’t a buyout, but an agreement between two companies with great cultural differences that strive to remain competitive in a world where Google rules. At last!

Here are the key terms of the agreement:

•The term of the agreement is 10 years;

•Microsoft will acquire an exclusive 10 year license to Yahoo!’s core search technologies, and Microsoft will have the ability to integrate Yahoo! search technologies into its existing web search platforms;

•Microsoft’s Bing will be the exclusive algorithmic search and paid search platform for Yahoo! sites. Yahoo! will continue to use its technology and data in other areas of its business such as enhancing display advertising technology.

•Yahoo! will become the exclusive worldwide relationship sales force for both companies’ premium search advertisers. Self-serve advertising for both companies will be fulfilled by Microsoft’s AdCenter platform, and prices for all search ads will continue to be set by AdCenter’s automated auction process.

•Each company will maintain its own separate display advertising business and sales force.

•Yahoo! will innovate and “own” the user experience on Yahoo! properties, including the user experience for search, even though it will be powered by Microsoft technology.

•Microsoft will compensate Yahoo! through a revenue sharing agreement on traffic generated on Yahoo!’s network of both owned and operated (O&O) and affiliate sites.

•Microsoft will pay traffic acquisition costs (TAC) to Yahoo! at an initial rate of 88% of search revenue generated on Yahoo!’s O&O sites during the first 5 years of the agreement.

•Yahoo! will continue to syndicate its existing search affiliate partnerships.

•Microsoft will guarantee Yahoo!’s O&O revenue per search (RPS) in each country for the first 18 months following initial implementation in that country.

•At full implementation (expected to occur within 24 months following regulatory approval), Yahoo! estimates, based on current levels of revenue and current operating expenses, that this agreement will provide a benefit to annual GAAP operating income of approximately $500 million and capital expenditure savings of approximately $200 million. Yahoo! also estimates that this agreement will provide a benefit to annual operating cash flow of approximately $275 million.

•The agreement protects consumer privacy by limiting the data shared between the companies to the minimum necessary to operate and improve the combined search platform, and restricts the use of search data shared between the companies. The agreement maintains the industry-leading privacy practices that each company follows today.

And finally:

The agreement does not cover each company’s web properties and products, email, instant messaging, display advertising, or any other aspect of the companies’ businesses. In those areas, the companies will continue to compete vigorously.

So Bing’s market share will surely grow. At least for now :)

Tuesday, July 28, 2009

.NET 4 Beta 1 supports Software Transactional Memory

Software Transactional Memory (STM.NET) is a mechanism for efficient isolation of shared state.  The programmer demarcates a region of code as operating within a transaction that is “atomic” and “isolated” from other transacted code running concurrently.

Transactional memory is considered a promising technology by the academic community and is repeatedly brought up as a welcome technology for the upcoming wave of applications which scale on modern multi-core hardware. The goal is to be able to exploit concurrency by using components written by experts and consumed by application programmers who can then compose together these components using STM. Transactional memory provides an easy-to-use mechanism to do this safely.

STM is not a 1:1 lock replacement and provides more functionality than critical sections, reader-writer locks, and other traditional synchronization methods. This functionality has overhead; the hope is that the scalability, productivity, and deadlock freedom gained by this mechanism outweigh the degradation in serial performance.

This is an experimental release of the .NET Framework that allows C# programmers to try out this technology, specifically a particular implementation of STM. We are interested in your feedback on your experience using this programming model. Is it valuable and easy-to-use? Does it provide enough functionality? Are you willing to pay with serial performance losses to gain greater scalability? Our implementation is integrated with the framework and tools, it has been extended to provide coexistence with locks, interoperate with traditional transactional technologies, and safely work with existing code.
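To give an idea of the programming model, here’s a minimal sketch of what a transacted region looks like. The `Atomic.Do` entry point and the namespace are as I recall them from the STM.NET samples, so treat the names as assumptions and double-check them against the Programming Guide:

```csharp
using System;
using System.TransactionalMemory; // namespace name assumed from the STM.NET samples

class BankAccount
{
    private int balance;

    public void Transfer(BankAccount to, int amount)
    {
        // The delegate runs atomically and isolated from other transactions:
        // either both updates commit together, or the transaction re-executes.
        Atomic.Do(() =>
        {
            this.balance -= amount;
            to.balance += amount;
        });
    }
}
```

Compare that with the lock-based version: no lock ordering to reason about, hence the deadlock-freedom claim above, paid for with some serial overhead.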


Getting Started

  1. Download and Install the .NET Framework enabled for Software Transactional Memory (STM.NET).
  2. Download and Install the samples, documentation, and configuration files necessary to use Visual Studio 2008 with the .NET Framework installed in the previous step.
  3. Read the STM Programming Guide to get started with STM.NET.

This is great news. The bad news is that the setup fails to install on my x64 machines. Argh!!! I want to play with STM!….

Sunday, July 26, 2009

Taking photos on the water

Water, salt, and sand are not the kind of friends your SLR wants. Though I often take the camera with me to the beach, I ended up taking pictures from the beach, not from the water. Like the next one:


Even a shot like this can be hazardous for your camera. A while after this shot, while I was lying on the sand, a wave bigger than I expected got me and the camera nearly took a dip.

Every now and then I jumped into the water, but only on low tide with calm waters, like this:


A hard case costs more than my camera, so I decided to get a cheaper bag case. The choice itself wasn’t simple, as you can buy a cheap underwater camera for about the same price, but the performance and flexibility of an SLR is something I wasn’t willing to give up.

I chose the Aquapac Waterproof SLR case (code 455), an £80 (€92.50) bag. I wanted to buy an ewa-marine case big enough to carry a flash, but it costs over €250.

Aquapac didn’t guarantee that it would fit either my Canon 10D or my 50D, but I ordered one anyway to check. Naturally I’m trying it on my old 10D, with an old Canon 28-105 USM, a lens I seldom use. And it fits. Not that it is easy to slip it in (you have to get the hang of it), but it fits. Here it is:


The operation is, well, clumsy. As expected. But I can still (slowly) use all of the buttons. I can even use the lens zoom.

I haven’t yet used it under water – well, to be honest I’ve dipped it underwater and took a shot, but the water was so rough that I couldn’t see anything. I have to try it out on a pool :)

Here are some shots above water:


And there it is: I can now take my camera to the beach, to the river, canoeing, sailing, and so on.

Let me finish with two important pieces of advice:

  • avoid contact with sunscreen: I wasn’t careful, and here’s a shot of what happened
  • be sure to carry a bottle of water and a cleaning cloth – salty water is definitely not a good filter for your shots

Wednesday, July 22, 2009

I will get Windows 7 RTM on…

Oops, don’t know for sure… Let’s see:

  • ISV (Independent software vendor) and IHV (Independent hardware vendor) Partners: 6th August
  • IT Professionals with TechNet Subscriptions: 7th August
  • Volume License (VL) customer with an existing Software Assurance (SA): 7th August
  • Microsoft Partner Program Gold/Certified Members: 16th August

Uhmm, probably on August 7th, if we receive our VL keys on that very same day. If everything else fails, on August 16th. This time I’ll probably wait for a proper channel release :)

Tuesday, July 21, 2009

Microsoft Contributes Code to the Linux Kernel

Microsoft released 20,000 lines of device driver code to the Linux Community. GPL. Unbelievable!

The code, which includes three Linux device drivers, has been submitted to the Linux kernel community for inclusion in the Linux tree.

What’s the catch? No catch, other than recognizing that Linux has its value in the server space, and that Hyper-V needs to support Linux if it wants to keep up with VMware and Xen.

It’s a cool new world we are living in :)

Friday, July 17, 2009

100Mbps to 1Gbps simple benchmark

For some strange reason we still have old 100Mbps switches in our development datacenter. Backing up 500GB of virtual machines can take 12 hours.

We sometimes pair VM host machine backups so we can guarantee a fast emergency restore. On these machines we are now connecting the 2nd network adapters directly with a crossover cable, so we get 1Gbps. And now we can do it in only 1½ hours. That is 8 times faster. Cool.

Here’s the experience we are collecting:


  • 100 Mbps: 39 GB/h
  • 1 Gbps: 309 GB/h

The 309 GB/h is bounded by the RAID controller; it is consistent with its 90 MB/s read limit.
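The numbers above check out with a quick back-of-the-envelope script using the post’s own figures:

```python
# Back-of-the-envelope check using the figures measured above.
backup_gb = 500

rate_100mbps = 39   # GB/h measured over the 100 Mbps switch
rate_1gbps = 309    # GB/h measured over the 1 Gbps crossover link

hours_slow = backup_gb / rate_100mbps   # ~12.8 h
hours_fast = backup_gb / rate_1gbps     # ~1.6 h
speedup = rate_1gbps / rate_100mbps     # ~7.9x

# The 1 Gbps figure sits just under the RAID controller's read ceiling:
raid_limit_gb_h = 90 * 3600 / 1000      # 90 MB/s = 324 GB/h

print(f"{hours_slow:.1f} h vs {hours_fast:.1f} h ({speedup:.1f}x faster)")
print(f"RAID read ceiling = {raid_limit_gb_h:.0f} GB/h")
```

So the network is no longer the bottleneck; the disks are.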


Wednesday, July 15, 2009

Error formatting an external 1TB drive on OSX

Not much to say here. I bought an external 1TB drive and failed to format it on Mac OS X as anything other than FAT32. I tried to format it from a GParted live disk with the same results. The solution: I dropped the partition using GParted, and OS X Disk Utility finally managed to create it.

PS: I didn’t try to drop the partition using OS X; it would probably have worked.

Azure pricing

Not as cheap as I expected, but anyway here it is:


Windows Azure:

  • Compute = $0.12 / hour (editor’s note: by the pricing I assume this is CPU time. At least I hope…)
  • Storage = $0.15 / GB stored / month
  • Storage Transactions = $0.01 / 10K
  • Bandwidth = $0.10 in / $0.15 out / GB

SQL Azure:

  • Web Edition – Up to 1 GB relational database = $9.99
  • Business Edition – Up to 10 GB relational database = $99.99
  • Bandwidth = $0.10 in / $0.15 out / GB

.Net Services:

  • Messages = $0.15/100K message operations , including Service Bus messages and Access Control tokens
  • Bandwidth = $0.10 in / $0.15 out / GB

How Consumption is Measured

Windows Azure

  • Compute time, measured in machine hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing.
  • Storage, measured in GB: Storage is metered in units of average daily amount of data stored (in GB) over a monthly period. E.g. if a user uploaded 30GB of data and stored it on Windows Azure for a day, her monthly billed storage would be 1 GB. If the same user uploaded 30GB of data and stored it on Windows Azure for an entire billing period, her monthly billed storage would be 30GB. Storage is also metered in terms of storage transactions used to add, update, read and delete storage data. These are billed at a rate of $0.01 for 10,000 (10k) transaction requests
  • Bandwidth requirements (transmissions to and from the Azure datacenter), measured in GB: Bandwidth is charged based on the total amount of data going in and out of the Azure services via the internet in a given 30-day period. Bandwidth within a datacenter is free.
  • Transactions, measured as application requests
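The storage metering rule above (average daily GB over the billing period, plus $0.01 per 10k transactions) is easy to turn into a quick estimate. This is my own sketch of the quoted rules, not an official calculator:

```python
# Hypothetical estimate of a monthly Windows Azure storage bill,
# following the metering rules quoted above.
STORAGE_PER_GB_MONTH = 0.15   # $ per GB stored per month
PER_10K_TRANSACTIONS = 0.01   # $ per 10,000 storage transactions

def storage_bill(daily_gb, transactions):
    """daily_gb: GB stored at the end of each day of the billing period."""
    avg_gb = sum(daily_gb) / len(daily_gb)    # average daily amount stored
    tx_blocks = -(-transactions // 10_000)    # ceiling division into 10k blocks
    return avg_gb * STORAGE_PER_GB_MONTH + tx_blocks * PER_10K_TRANSACTIONS

# 30 GB stored for a single day of a 30-day period -> billed as 1 GB
print(storage_bill([30] + [0] * 29, 0))    # 0.15
# 30 GB stored for the whole period, plus 25,000 transactions
print(storage_bill([30] * 30, 25_000))     # 30*0.15 + 3*0.01 = 4.53
```

The two print calls reproduce the 1 GB vs. 30 GB example from the quoted text.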

SQL Azure

Web Edition Relational Database includes:

  • Up to 1 GB of T-SQL based relational database
  • Self-managed DB, auto high availability and backup
  • Auto Scale with pay-as-you-grow
  • Best suited for Web application, Departmental custom apps.

Business Edition DB includes:

  • Up to 10 GB of T-SQL based relational database
  • Self-managed DB, auto high availability and backup
  • Auto Scale, pay-as-you-grow
  • Additional features in the future like auto-partition, CLR, fanouts etc.
  • Best suited for ISVs packaged LOB apps, Department custom apps

.NET Services:

Messages (includes Access Control, Orchestration, and Reliable Queuing for messages): .NET Services allow developers to easily connect their cloud applications and databases with existing software assets and users. This connection between cloud and on-premises assets is facilitated by the exchange of messages. The consumption-based pricing model means that customers will pay only for the number of message operations that their applications use. The definition of a “message operation” includes Service Bus messages and Access Control tokens. Messages are charged to the customer in discrete blocks of 100,000 (“100k”) for each monthly billing period, meaning that

  • A customer who consumed 95,000 messages would be billed for 1x100k messages (plus the bandwidth used to send messages in or out).
  • A customer who uses 150,000 messages in a billing period would be charged for 2x100k messages (plus the bandwidth used to send messages in or out).
  • A customer who uses 20 million messages in a billing period would be charged for 200x100k messages (plus the bandwidth used to send messages in or out).
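The three examples above all follow the same round-up-to-100k rule, which fits in a few lines (my own sketch of the quoted billing rule):

```python
import math

MESSAGE_BLOCK = 100_000
PRICE_PER_BLOCK = 0.15  # $ per 100k message operations

def message_charge(messages):
    """Messages are billed in discrete 100k blocks, rounded up."""
    blocks = math.ceil(messages / MESSAGE_BLOCK)
    return blocks, blocks * PRICE_PER_BLOCK

print(message_charge(95_000))       # (1, 0.15)
print(message_charge(150_000))      # (2, 0.3)
print(message_charge(20_000_000))   # (200, 30.0)
```

Bandwidth in and out is billed on top of this, as noted in each bullet.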

Service Level Agreements (SLA)

Windows Azure:

Windows Azure has separate SLA’s for compute and storage. For compute, we guarantee that when you deploy two or more role instances in different fault and upgrade domains your Internet facing roles will have external connectivity at least 99.95% of the time. Additionally, we will monitor all of your individual role instances and detect within two minutes when a role instance’s process is not running and initiate corrective action (we will publish by PDC the full details of our uptime promise for individual role instances). For storage, we guarantee that at least 99.9% of the time we will successfully process correctly formatted requests that we receive to add, update, read and delete data. We also guarantee that your storage accounts will have connectivity to our Internet gateway.


SQL Azure:

SQL Azure customers will have connectivity between the database and our Internet gateway. SQL Azure will maintain a “Monthly Availability” of 99.9% during a calendar month. “Monthly Availability Percentage” for a specific customer database is the ratio of the time the database was available to customer to the total time in a month. Time is measured in 5-minute intervals in a 30-day monthly cycle. Availability is always calculated for a full month. An interval is marked as unavailable if the customer’s attempts to connect to a database are rejected by the SQL Azure gateway.
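To put those percentages in perspective, here’s the monthly downtime each guarantee actually allows (my own arithmetic, not from the announcement):

```python
# Allowed downtime per 30-day month for a given availability guarantee.
def allowed_downtime_minutes(availability_pct, days=30):
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - availability_pct / 100)

print(allowed_downtime_minutes(99.95))  # Windows Azure compute: ~21.6 minutes
print(allowed_downtime_minutes(99.9))   # storage / SQL Azure: ~43.2 minutes
```

Note that SQL Azure measures availability in 5-minute intervals, so a single bad interval already burns about a ninth of the monthly allowance.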


.NET Services:

Uptime percentage commitments and SLA credits for .NET Services are equivalent to those specified above in the Windows Azure SLA. Due to inherent differences between the technologies, underlying SLA definitions and terms differ for .NET Services. Using the Service Bus module of .NET Services, customers will have connectivity between a customer’s service endpoint and our Internet gateway; when our service fails to establish a connection from the gateway to a customer's service endpoint, then the service is assumed to be unavailable. In addition, the service will process correctly formatted request for the handling of messages; when our service fails to process a request properly, then the service is assumed to be unavailable. SLA calculations will be based on an average over a 30-day monthly cycle, with 5-minute time intervals. Failures seen by a customer in the form of service unavailability will be counted for the purpose of availability calculations for that customer. Additional SLA definitions and terms will be detailed at a later date.

Monday, July 13, 2009

No Windows 7 RTM announced… yet…

Apart from a supposed RTM thrown onto the torrents (a build 7600), no RTM has been announced or released today. I’m betting this 7600 is bogus, at least as an RTM; the early torrent releases were probably put there by Microsoft to assure free publicity and motivation to install (for some strange reason called progress, humankind doesn’t get excited about easy tasks). And Microsoft won’t ever upload an RTM itself; Microsoft no longer sponsors pirating its own software... It will get to the torrents, no doubt about it, but it will first be delivered at least to MSDN subscribers.

Let’s wait a couple of days and see what happens. Is Microsoft managing the announcement cycle away from Silverlight 3?…


bink.nu sources seem to confirm the RTM as 7600.win7_rtm.090710-1945. That’s the torrent version. Could it be?….



According to windows blog:

We are close, but have not yet signed off on Windows 7. When we RTM you will most certainly hear it here. As we’ve said all along, we will RTM Windows 7 when it’s ready. As previously stated, we expect Windows 7 to RTM in the 2nd half of July.

And their advice:

I am using one of the so-called “leaked” builds of Windows 7, how will I know if it is the real deal?

As always, beware of what you download. There are many bogus copies of Windows 7 floating around the Internet. More often than not, they contain a rather nice malware payload. And don’t believe everything you read on the Internet. When Windows 7 hits RTM, it will be announced here. Until that happens, any builds you are likely to see on the web are either not the final bits or are laced with malicious code.

Ok, so I did download the leaked version, though I firmly believe it’s a bogus one; so what? :) Some of the better builds I’ve installed were leaked versions :) I’ll probably wait for the final version…

This will probably be one of the first sites to announce the RTM, so let’s stay tuned.


Monday, July 06, 2009

One step further away from VMWare

Here’s a long time VMWare user that is about to dump it for good. The last straw: ESXi!

We’ve just installed ESXi 4. Other than the oversimplistic setup (I couldn’t find how to set the partitions) and the lack of standard Unix services (ok, I can understand why, but man, this is annoying), the last straw was the apparent network bandwidth capping.

VMware doesn’t depend on open protocols to manage remote file systems, but on its own implementation. With this implementation I couldn’t go over 4MB/s, even using some alternative tools referenced as faster. After some binging it looks like VMware capped this in version 4 <update>Fixed broken link</update>. Incredible! And since we are using the free software, we have to depend on standard tools and protocols to guarantee backups.

We’ll try NFS to see if we can get over this stupid limit. If it doesn’t work we’ll probably ditch ESXi. And now for the next options:

  • VMWare Server over Windows or Linux (our present solution)
    • pros: less memory greed than Hyper-V, porting is direct
    • cons: no hypervisor (not as fast as Hyper-V)
  • Hyper-V
    • pros: hypervisor (speed!)
    • cons: conversion needed

Let’s see how it goes.



Ok, now we have SSH and FTP on ESXi, and over FTP it seems like the capping has gone :)


Friday, July 03, 2009


DirectAccess

Windows 7 and Windows Server 2008 R2 introduce DirectAccess, a solution that gives users the same experience working remotely as they have when working in the office. With DirectAccess, we can access corporate file shares, Web sites, and applications without connecting to a virtual private network and, above all, while still routing other traffic directly to the Internet.

Not that we couldn’t do this before, but this is transparent, simple to use, and secure. Or it promises to be; I haven’t tried it yet…

Thursday, July 02, 2009

The Indians and weather bureau

The Indian Chief phoned the weather bureau and asked if the next winter would be harsh. The weather bureau predicted some rain, so the Indians started collecting wood for a rainy winter. As winter approached these queries continued, the weather bureau’s forecasts got rainier, and the Indians kept chopping more wood.

Finally the Indian Chief started doubting the increasingly worse winter forecasts, and the weather man assured him: “Have no doubt about it, we have been receiving consistent information that the Indians are cutting wood like there is no tomorrow!”

I’m working on an urban traffic and re-routing project – well, a prospect for now. This class of problems is a little tricky; you can imagine the level of decisions our algorithm has to cope with (just imagine if a re-routing alert were accepted by all drivers!…). These problems are particularly hard because a small choice can quickly and dramatically rearrange the entire information network, quickly outdating and probably condemning the choice itself… Oh well, that’s why a partner at the meeting told me the joke I’m sharing :)

WSS: recovering an overwritten AllItems.aspx

Let’s start by stating the obvious: you should never assume that the software you’re using can’t be as dumb as that; it often can. That simple assumption itself is quite dumb. And that is how I’m feeling right now: dumb.

I was saving an Excel worksheet into WSS and copied the full path from IE, including AllItems.aspx! So what did our friend WSS decide to do? It uploaded my Excel file as AllItems.aspx! Argh!!! I’ve lost access to my library!!!

The solution I found was to create a new default view. The majority of binged results suggested an alternative approach: using SharePoint Designer to copy AllItems.aspx from another library.


Here’s how:

Site Settings > Site Libraries and Lists > Customize "My Library"  > Views > Create View > Standard View > “AlternativeAllItems”     [X] Make this the default view


Development Catharsis :: Copyright 2006 Mário Romano