Though this too brings up the question of why Wily raised the point at all if he knew Megaman could and would harm him, unless he was pointing out that it would make Megaman a criminal. It could also be that Megaman himself has the law hardwired into him, while others don't. Whether you go by the American version's slow speech or the Japanese version's simple raising of the buster, both seem to indicate a powerful internal struggle.
It's more about guilt and what is truly right and wrong, actually. Rock raised his weapon against a defenseless human, and humans have their place as the creators of robots. Is it truly right for Rock to do so, even if this one human has caused countless tragedies? Rock, being a hero designed with a strong sense of justice, cannot quickly come up with an answer to this question, and a feeling of guilt overcomes him as he hesitates.
Naturally, there would be legal consequences for Rock's actions, but this is Wily; would the law really be that harsh on him?
In a similar case, in the Power Battle, Wily brought up that Rock was as bad as him for destroying countless robots. Rock felt quite guilty and couldn't come up with an answer of his own; he needed Dr. Right and his friends to help him set his conscience straight.
Astroboy
Tezuka didn't label his rules in any numerical order, whereas Right referred to something that was described as "the first rule of robotics". X1 certainly appears to literally quote the simplified form of Asimov's first law. Are both not somehow involved?
In regards to which rules Rockman follows, I think it's a mix of several things, actually. Rockman certainly takes inspiration from a lot of other series, AstroBoy being a dominant example, as it has the same charming atmosphere as the classic series...
In regards to the original Astroboy, I did some googling to confirm that, and found this:
http://www.raintaxi.com/online/2002winter/tezuka.shtml

"Underpinning the Astro Boy stories is the 'robot law,' which states that the two main rules are 'robots exists to make people happy' and 'robots shall not injure or kill humans.'"

-"Robots exists to make people happy"
-"Robots shall not injure or kill humans."
Using the type of numeric ordering that Asimov used, which of these rules is more important?
Certainly, Rockman bases the way its robots interact on the charm presented by Astroboy. But as the series progressed and the X-series came about, being considerably less light-hearted, inspiration from other, more serious series found its way into the continuity. Not to mention that Rockman and RockmanX are dominantly set in America instead of Japan. One would be hard pressed to discount Asimov influencing the Rockman series. When it comes to all that, Tezuka's "must make people happy" concept certainly isn't going to cut it in the world of the X-series; the fact that robots cannot harm humans becomes much more important than making people happy, and more serious rules than those are seemingly needed.
The last few pages of discussion here seem to have mostly been born from a misunderstanding of whether the term "Asimov's" refers to the whole "forced" aspect, or simply to the three basic concepts that each law expresses.
In terms of just the written description, Asimov's rules of robotics are, in their simplified form:
-Robots must not harm humans.
-Robots must follow orders.
-Robots must not harm themselves.
Let's presume you've simply heard these three concepts, but are unaware that these rules are hardwired into and forced onto robots; they are concepts that appear to be very much common sense for robots to follow. But you want to make a series like Astroboy, which has this magnificent freedom and charm in its robots. How are you to make your robots follow the above three basic principles without making them into emotionless automatons?
I don't think it can be denied that at least two of those core principles have been hinted at, or even explicitly mentioned, in the Rockman series. Right clearly mentions the first, and Blues hints at the existence of the second: robots are not to harm humans, and robots should follow orders.
The third might not be hinted at, but I don't think it's something the Rockman series can easily violate. After all, the intended charm of the series in general doesn't even allow such a heavy concept as robots deliberately committing suicide or causing harm to themselves, outside of robot heroics and such. Asimov covered such heroics by adding a disclaimer to each subsequent law that says "unless it contradicts the previous laws".
Rockman, which does not follow the principle of "forcing" these basic rules, naturally has no clear hierarchy among the rules like the one Asimov used, following Tezuka more closely; Rockman can surely decide to protect himself regardless of human orders. After all, if he couldn't, he would simply be one of Asimov's mindless automatons instead of one of Tezuka's lighthearted robots.
Rockman seems to deal with these concepts by giving its robots a conscience. The conscience Rock was given, however, deals with many more complex matters than simply the three core concepts Asimov wrote down. Tezuka's "robots are to make people happy" concept is certainly part of that conscience as well, in order to make him cover all aspects of right and wrong. After all, it was the kindhearted Dr. Thomas Right who wrote his brain.
Put yourself in the position of Dr. Right. Certainly, from his point of view, robots are to do good, and robots are to weigh these concepts with equal value depending on the situation at hand. Robots are to always do good and are not to make such unfair, biased choices as putting humans ahead of robots. In order to make a utopia in which humans and robots can live together, robots are to be allowed to be equal to humans, but at the same time do good without a doubt. In his ideal, robots are exactly like humans, but robots and humans are not in conflict; there is peace, tranquility and happiness. Everyone is good, robots and humans alike.
Behind Dr. Right's ideals, however, stands the whole scientific community, which is certainly a much harsher world than Dr. Right's ideal vision. Such notions as robots making people happy, and robots being happy themselves, are of no concern to them. That robots are unable to harm humans is of far greater importance to these people than the naive concept of human and robot "happiness". As long as Thomas Right makes reliable robots, they are satisfied. As long as those robots obey the "absolute" requirement that a robot cannot harm a human, they are satisfied.
To the scientific community, to the world, Dr. Thomas Right's idea of placing variable worth on such absolute requirements as the inability to harm a human is dangerous. Very, very dangerous. Dr. Right has always been on the border of the conflicting ideas of "making robots more like humans" and "making them humanity's tools".
Regardless of his ideal, Dr. Right himself realizes this as well. With RockmanX, "worrying", he's stepped over the boundary: he created something that is undeniably dangerous, something that is entirely to be feared. Something the world is not yet ready to accept.
No matter how it pains Dr. Right, he has to seal RockmanX away until the world has become more mature, until the day arrives that humanity is ready to accept him. After all, for all the danger X poses, for all he is feared, X embodies all of Dr. Right's ideals, and, as his heart clearly tells him, RockmanX is the world's hope. RockmanX is his personal hope.
Returning to Tezuka and Asimov: I think the inevitable conflict between the cold, hard world of Asimov's robots and the lighthearted, charming world of Tezuka's robots is a core concept lying at the root of the classic-to-X-series transition. After all, the difference between the Tezuka and Asimov approaches more than accurately describes the shift from the classic series to the darker and grittier X-series.