“Alert is bad” – really? [closed]
There’s this idea running around that “alert() is bad”.
Acknowledgements:
- Sure, we rarely want to use it in an actual UI design since there are better ways to communicate with users.
- For debugging, console.log() has much more value than alert().
- Certain situations (like use of setTimeout) run into problems when alert() gets in the way.
- Actual debuggers handle pausing and resuming of execution much better than alert(), if that’s what a developer needs.
Questions:
- Is there a solid, logical reason to never use alert()?
- Does the increased value of console.log() truly reduce the value of alert() so drastically that it goes from “useful in limited scenarios” to “bad”?
- What do you say to someone who wants to use alert() in a brief test where logging is not set up and any side effects are irrelevant (think tutorials or quick prototypes)?
Is there a solid, logical reason to never use alert()?
alert() is bad simply because it has no positive features, only negative ones:
- blocks the entire browser
- blocks the JavaScript thread (see the sketch after this list)
- only prints strings
- requires user interaction to continue (meaning you can’t automate browser usage)
- is blocked by common popup blockers
- doesn’t work in non-browser environments like Node.js (whereas console.log does)
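The first two points are easy to see for yourself. Below is a minimal sketch (browser-only; paste it into a page or the dev-tools console): the zero-delay timer cannot fire until the alert() dialog is dismissed and the current script finishes.

// The timer below cannot fire while alert() holds the thread.
setTimeout(function() {
  console.log("timer fired");
}, 0);

alert("dismiss me first");      // everything pauses here, including the timer
console.log("after the alert"); // runs once OK is clicked; the timer fires after that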
Does the increased value of console.log() truly reduce the value of alert() so drastically that it goes from “useful in limited scenarios” to “bad”?
Yes, although there are some exceptions.
The only value alert() has is as a quick, hackish tool for debugging legacy browsers, or as a tool for annoying users.
- No, it is just a language feature, and there is no reason to never use alert(). alert() works differently than console.log(), and console is not always available, so console.log() may reduce the value of alert(), but it surely cannot always replace it.
- Explain how console.log() differs from alert(), especially that alert() must output a string, so it first converts the value to a string – this matters if you want to check what value you have at some point and you picked alert() for the task (see the sketch below). alert() also stops the execution of the script, which may itself be useful sometimes.
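A quick illustration of that difference (a minimal sketch; the object is arbitrary):

var user = { name: "Ada", roles: ["admin"] };

alert(user);       // displays "[object Object]" – the value is coerced to a string
console.log(user); // shows an expandable, inspectable object in the console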
This is on par with the “never use tables” bit. It’s hyperbole aimed at reducing the number of instances of this extremely bad design. It’s considered bad design because it prevents additional browser actions (like the back button) and code execution (additional JavaScript and page rendering) until the user clicks the “OK” button. The “OK” label on that button cannot be changed and is inappropriate for the vast majority of use cases.
There are better ways to display error information, sanity check actions and confirmation dialogs, so use them.
There are “proper” use cases such as when you need to stop the normal page flow for some reason and it is dangerous (security) to continue to load pages. I can’t think of many specific examples, but they’re out there on the fringe. Somewhere.
You are interrupting the user: if they have many tabs open and you call alert(), they are suddenly pulled over to your tab. A pain in the ass for me as a user; that is the biggest concern for me.

Having said that, console.log does not work in all browsers; IE7, for one, does not support it.

Using alert is fine so long as it is clear why there is an alert.

Also, alert is not helpful for logging JS objects; it can only display strings.
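If you are stuck with alert() for a quick check anyway, one common workaround is to serialize the value first (a sketch assuming a browser with native JSON support, or a JSON shim for older ones like IE7):

var data = { id: 42, tags: ["a", "b"] };

// JSON.stringify makes the structure visible; the third argument pretty-prints it.
alert(JSON.stringify(data, null, 2));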
For debugging purposes, alert can sometimes be counter-productive, but it is immediate. The problem with using alert in debugging scenarios is that it disrupts program flow, whereas console.log benefits from allowing program flow to continue.
What you have to bear in mind, though, is that console.log is not a standard property of window; it exists because third-party tools such as Firebug, Inspector and IE Developer Tools extend the window object with a console object instance. If you end up leaving console.log statements in your code when you are not running something like Firebug, it can cause your scripts to fail.
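A common defensive pattern for that situation is to guard the call, or to stub console once so stray console.log calls cannot throw (a sketch, not the only way to do it):

// Guard an individual call:
if (window.console && window.console.log) {
  console.log("safe to log");
}

// Or install a no-op console once, early in the page:
if (!window.console) {
  window.console = { log: function() {} };
}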
Using alert in live code seems to be frowned upon, but it is perfectly logical to use if that is the mechanism by which you wish to alert your user. E.g., if they try to submit invalid data, it is perfectly valid to throw up an alert("Please enter XXX");. Of course, whether that provides the best user experience is another thing.
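For example, a bare-bones validation check might look like the sketch below (the field id is made up for illustration):

var name = document.getElementById("name").value;

if (name === "") {
  alert("Please enter XXX"); // blunt, but impossible to miss
}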
One other thing to consider is that you can replace the alert function if you want to, e.g.:
// Keep a reference to the native alert so it can be restored later.
var oldAlertFunc = window.alert;

// Redirect alert calls to the console instead of a blocking dialog.
window.alert = function(message) {
  console.log(message);
};
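With that override in place (a sketch of the idea rather than something to ship), every alert() call on the page logs to the console instead of blocking, and the original behavior can be restored from the saved reference:

alert("this now goes to the console");
window.alert = oldAlertFunc; // restore the native blocking dialog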
Well, it seems that you answered your questions with your first list of reasons.
Is there a solid, logical reason to never use alert?
When debugging sites? Yes, never use alert().

Never, ever? Maybe too far. But for most users, alert() brings about anger and frustration. I’ve never been to a site where I was glad they interrupted my visit with an alert dialog box.
Does the increased value of console.log() truly reduce the value of alert() so drastically that it goes from “useful in limited scenarios” to “bad”?
In the context of debugging applications I agree, alert() is bad. I can supply much more information in a console.log(), an entire JavaScript object that details a lot of information. There simply isn’t as much flexibility in messages displayed in an alert box.
What do you say to someone who wants to use alert() in a brief test where logging is not setup and any side effects are irrelevant (think tutorials or quick prototypes)?
It really isn’t that much harder to type console.log() as compared to alert(). You get much, much more useful information and you’re priming your coding “muscle memory” for good practices. If you use good practices all the time they will come naturally to you. If you use bad practices all the time they will come naturally to you too.
Ultimately though it depends on what you are doing. Are you debugging? Or are you trying to communicate to the user? These are two different things that require different approaches.