Wikipedia:Reference desk/Computing
Welcome to the Computing section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 4
Newsgroup / computer newbie type question
Having recently joined a couple of newsgroups (one is alt.usage.english) I have been happily posting. I type in MS Word, and then paste into the newsgroup. What I would like to do is set up a blank MS Word document template that has exactly the proper margins, left and right, so that it is WYSIWYG; that is, so I know my MS Word text will appear in the newsgroup exactly as I typed it, and I can make it look perfect. I don't know where else to ask for help, as Google has discontinued their help desk forum. Any idea on how to do that?
Also while I’m here, I’m told that Google does not have the capacity for a sig file on newsgroup contributions. I don’t know why not, but I would like one on my posts. Any idea on how that could be done? Myles325a (talk) 04:23, 4 November 2010 (UTC)
- Unless there has been a mighty change in how Usenet works, newsgroups do not have any of the formatting you are concerned about. It is plain text. Google performs formatting based on common trends in newsgroups; for example, if I type _this_, Google will display it in italics. Margins do not exist. You are just seeing the text in the width that your browser allows. But, ignoring that this is based on a misconception... you want a document template. Open Word. Get the page set up just the way you like. Save it as a template: in the save window, you will see a list of file types to save as; choose template. Then, next time you want to work, open the template and it will have all your formatting in place. -- kainaw™ 12:05, 4 November 2010 (UTC)
- You can send encoded text, e.g. HTML. However it's considered very bad form in the vast majority of newsgroups (well I don't know any that allow it but I always say it's a bad idea to use absolutes). Nil Einne (talk) 20:08, 4 November 2010 (UTC)
Building a Storage Area Network Head - Operating System
Hello Everyone,
I am exploring the option of building my own SAN head. I'm aware that a SAN head consists of a motherboard, processor(s), memory, a RAID controller, and a load of HDDs, but I want to know what software a SAN head runs. I assume it must have an operating system which manages the assignment of LUNs to multiple hosts, and I want to know what software provides these functions. Are there any open-source SAN operating systems available?
Thanks as always. Rocketshiporion♫ 05:43, 4 November 2010 (UTC)
- You can make a network accessible file system using Linux and NFS, FUSE/SSH, or SAMBA. Technically, that is a file server. Network-attached storage, storage area network, and file server differ in the "layer" that they provide. You'll have a hard time building a SAN out of commodity hardware and free software; and you probably won't reap the cost/performance benefits unless you are scaling to truly enormous enterprise sizes. As CPU costs decrease and performance becomes trivial, file servers are probably your best bet - let the remote system handle the file system details and provide the attached storage space at a high-level, as a network drive. Regarding specific software: you can run a NAS or file server using any Linux; BSD is popular; and FreeNAS is basically a pre-configured FreeBSD installation with fewer general-purpose features. The best choice ultimately depends on your needs. Nimur (talk) 06:12, 4 November 2010 (UTC)
- You can build an open-source SAN using technologies such as Logical Volume Manager (Linux) (to create and manage the LUNs) and a block device exporter (e.g. vblade or an iSCSI target). Googling 'linux san' will turn up various guides and distributions for setting such things up. Whether you would want a SAN (exporting block devices) or a NAS (exporting file systems) depends a lot on your requirements. Unilynx (talk) 06:48, 4 November 2010 (UTC)
The specific purpose is for four diskless computers to each be connected via iSCSI to its boot volume. AFAIK, it's not possible for computers to boot from a NAS or file server. The four diskless computers will also be clustered, and connected to a shared LUN (which is also to be stored on the SAN). As for cost, all I really need (other than the operating system) in terms of commodity hardware are: a motherboard, a processor, some RAM, a quad-port Ethernet card and a few SATA HDDs, which I estimate can be obtained for under $2,000. Rocketshiporion♫ 13:05, 4 November 2010 (UTC)
BitTorrent
I was reading the article on it, including the part where it mentions that computers on the network that have full versions of the file are called seeders. That there's a special name for computers that have full versions implies that there are other computers that have pieces which would seem to be useless to them. Is having a bunch of pieces which are of no use to you (in addition to whatever files you are seeding in full) just the cost of being a reputable member of a swarm? 20.137.18.50 (talk) 13:35, 4 November 2010 (UTC)
- A computer with only some pieces of the file can share those pieces with others, and vice versa. So even if there are no seeders at all, the various pieces across the swarm can account for the entire file, and thus the entire file can still be downloaded. This is good because it means seeders aren't the only source for the file. 82.44.55.25 (talk) 13:53, 4 November 2010 (UTC)
- A main benefit of bittorrent is that while you are downloading a file, you can share the parts of the file you already downloaded with other people who are downloading it. Often, more people are downloading a file than there are seeders. So, most people only have parts of the file. As a courtesy, it is common to keep connected after you download a file, becoming a seeder. You no longer need any parts of the file, but you share what you've got for a while just to help others. -- kainaw™ 13:56, 4 November 2010 (UTC)
- Nobody has pieces which are "useless" to them, because they are, presumably, interested in eventually downloading the entire file. So just because I have the last 25% of the file in question and not the first 25% doesn't mean that the last 25% is useless to me (even if I can't "use" it for anything at this point): presumably I'm hoping to get the entire file, and the last 25% puts me that much closer to that goal. Remember that people aren't just hosting because they are being generous; the entire point of a torrent is to distribute the file across all of the participating computers, including those still downloading it.
- Seeders are special because they have the entire file yet are still distributing; thus they are being somewhat altruistic, because at that point there is no personal gain in distributing the file (the only "gain" in distributing is that you get a copy of it yourself). Seeders aren't required, but they greatly help make sure there are redundant copies of the entire file on the network, which speeds things up (because otherwise maybe the only fellow who has one part has a very slow internet connection, introducing a bottleneck until others get that part; without seeders, it is not uncommon to see torrents "stuck" at 99%, never quite able to find that last 1%). --Mr.98 (talk) 14:06, 4 November 2010 (UTC)
- I think there's a basic misunderstanding here, but I'm not sure where. When you join a swarm you have none of the file. While you're downloading you'll be accumulating pieces of the file. The pieces are not, on their own, useful to you because you want the entire file, but you can redistribute those pieces to people who don't have them yet. (Everyone downloads the pieces in a different order, so even if you've only downloaded one piece, you can still help people who are halfway finished.) Eventually, if you keep downloading, you'll have the entire, complete file. (You'll have all the pieces.) You are now a "seeder", and ideally you'll continue distributing pieces to people that need them, though that's not strictly necessary.
- Unless there's a glitch, at no point will you download a 'piece' that you don't personally need to complete a file that you personally are trying to download. APL (talk) 15:17, 4 November 2010 (UTC)
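- (To make the piece-exchange idea concrete, a toy sketch in Python; this is an illustration only, not the real protocol: no tracker, no rarest-first selection, no tit-for-tat. One seeder plus five downloaders trade random pieces until everyone has the whole file:)
import random

PIECES = 16
peers = [set() for _ in range(5)]   # five downloaders, starting with nothing
peers.append(set(range(PIECES)))    # one seeder holding the complete file

rounds = 0
while not all(len(p) == PIECES for p in peers):
    rounds += 1
    for me in peers:
        # pieces someone else has that I still lack
        wanted = [i for other in peers for i in (other - me)]
        if wanted:
            me.add(random.choice(wanted))

print("everyone completed the file after", rounds, "rounds")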
Is there any C syntax that is forbidden in C++?
t.i.a. --217.194.34.103 (talk) 14:19, 4 November 2010 (UTC)
int new = 0;
and many others. --Stephan Schulz (talk) 14:22, 4 November 2010 (UTC)
- To clarify, C++ reserves some extra keywords. In C they would be valid as variable names or function names, but in C++ they are reserved for the use of the language. Here's a list of C++ keywords not reserved in C:
asm, bool, catch, class, const_cast, delete, dynamic_cast, explicit, false, friend, inline, mutable, namespace, new, operator, private, protected, public, reinterpret_cast, static_cast, template, this, throw, true, try, typeid, typename, using, virtual, wchar_t
- APL (talk) 15:09, 4 November 2010 (UTC)
- And this is valid C89, but not C++:
int test()
{
int i;
i=1 //**/1;
return i;
}
- APL (talk) 15:09, 4 November 2010 (UTC)
- The code requires // not to be a comment marker, which is true for C89/C90 and older, so the line reads i=1/**/1; i.e. i = 1/1. In C99 and C++, // starts a line comment, so the divisor and the terminating semicolon are swallowed and the code is rejected.
- Also, in C,
- In C89 the return type can be left unspecified (it defaults to int); in C++ it must be specified.
- In C an empty parameter list means the parameters are unspecified (K&R-style parameter passing), so a function taking no parameters is declared with a parameter list of 'void'; in C++ an empty list already means no parameters.
- K&R parameter passing is allowed in C (but discouraged).
- The parameters to main can be whatever you want; in C++, main must be one of:
- int main()
- int main(int, char**)
- int main(int, char**, /* any other params */)
- In C, void* can be implicitly converted to any pointer type; C++ needs an explicit cast. —Preceding unsigned comment added by Csmiller (talk • contribs) 15:27, 4 November 2010 (UTC)
- CS Miller (talk) 15:24, 4 November 2010 (UTC)
- See Constructs valid in C but not C++. --Sean 18:12, 4 November 2010 (UTC)
- The article linked to by Sean is a very, very good starting point. For more differences, see Annex C of the C++ standard (ISO/IEC 14882:2003). For example:
int main(void) { return main(); /* valid (but useless) C, invalid C++ */ }
- Regards, decltype (talk) 05:20, 5 November 2010 (UTC)
hard drive
Can I leave my external hard drive on 24/7, or should I turn it off when not in use? My computer's internal drive is on 24/7; is there a difference between internal and external? —Preceding unsigned comment added by 91.193.69.210 (talk) 14:51, 4 November 2010 (UTC)
- Most computers will put a drive to sleep if it is inactive for a period of time, so leaving the drive powered up is not itself a problem. Any drive will wear out over time (so always make backups) but for the most part drives will outlast computers, unless you keep your computer a long time or make heavy, heavy use of the drive (constant read-write action). --Ludwigs2 15:25, 4 November 2010 (UTC)
- [citation needed] on the claim that "for the most part drives will outlast computers". Comet Tuttle (talk) 16:28, 4 November 2010 (UTC)
- I'd read this as "computers get replaced before hard drives fail", not as "usually other components fail before the hard drive". But yes, a source for either interpretation would be nice. --Stephan Schulz (talk) 16:32, 4 November 2010 (UTC)
- Depending on how you look at it, the MTBF of a hard disk (and all the other components) is less than the average time a computer is in primary use thanks to the software lifecycle. Greenpeace quotes a figure of 2 years for the average PC lifespan in developed countries, but this is not cited. Regardless, it's obvious by the tonnage of computers that show up in the landfill and the number that fly off shelves to replace them that the used life isn't that long. I would be shocked if it were more than 5 years in the US; most HDD warranties are as long. --Jmeden2000 (talk) 17:56, 4 November 2010 (UTC)
- Again, please provide citations. This is a reference desk. Simply discussing warranties and the MTBF claims (which are notoriously exaggerated by the manufacturers) isn't evidence of anything. Comet Tuttle (talk) 18:48, 4 November 2010 (UTC)
- But ah, we are not writing a scholarly article; we are trying to come up with a useful answer to the question at hand. No offense, but his question is no closer to being answered as a result of putting [citation needed] next to every response. --Jmeden2000 (talk) 13:40, 5 November 2010 (UTC)
- CT, relax. The point is that a hard disk is not going to unduly suffer by being powered on continuously, unless it is also subjected to extreme conditions (high levels of disk read/writes, excessive temperatures, etc). A hard disk is obviously more likely to fail than the computer itself (by virtue of moving parts), but hard disks are constantly increasing their lifespan through technological improvement, and the average consumer replaces his computer at regular intervals. The OP should not worry about it beyond the normal caution to maintain regular backups. --Ludwigs2 21:50, 4 November 2010 (UTC)
- "but hard disks are constantly increasing their lifespan" - well, that's arguable. We have a batch in a large recording system over here where 20% per year are failing...big-brand, heavy duty server class drives at that. Hard drives are optimized for capacity, speed, price, and reliability, and I would expect priorities to be roughly in that order. --Stephan Schulz (talk) 21:59, 4 November 2010 (UTC)
- All of the computer failures I've had (2 of them) have been failed hard drives (well, I also had a CD drive fail, but I still used the computer for a bit). There have been many more computers that I've gotten rid of before anything failed, but I would say that a hard drive, with its mechanical moving parts, is one of the most likely things to fail in a computer. Interestingly, this site says that power supply issues are the number one way to fry your computer. I've yet to have a power supply problem. Buddy431 (talk) 01:22, 5 November 2010 (UTC)
- Here's a highly non-scientific poll of failing parts; storage drives have the plurality. Buddy431 (talk) 01:24, 5 November 2010 (UTC)
polynomial shift
I have the coefficients of a polynomial of one variable. I want the coefficients of the equivalent polynomial in terms of another variable (u=t+constant); in other words, the function that defines the same curve with a shifted origin. Before I reinvent the square wheel, is there a more efficient way than expanding and adding up the results? (Or better yet a numpy library function?) I'd search but I don't know the appropriate keyword. —Tamfang (talk) 17:11, 4 November 2010 (UTC)
- You've got a function
f(x) = a0 + a1·x + a2·x^2 + a3·x^3 + …
- and you want to know the b's in
f(x) = b0 + b1·(x − c) + b2·(x − c)^2 + b3·(x − c)^3 + …
- then the Taylor series will work (it is exact for polynomials) and gives you the b_n fairly trivially, since
b_n = (1/n!) × d^n[f(x)]/dx^n evaluated at x = c
- Is that what you want? It's easier to implement (and quicker, I think) than expanding and collecting terms, especially for a long polynomial; ask if you want pseudocode.
- (Apologies if that isn't what you are asking; I'm a bit sleepy.) The two expressions above give the same value for any given x. (On second thought it may not be any better than the binomial expansion; maybe, maybe not.) 94.72.205.11 (talk) 20:35, 4 November 2010 (UTC)
- Treating it as a Taylor series makes eminent sense; it may not be speediest but it's easy on the programmer, since I already have a "deriv" routine. Thanks. —Tamfang (talk) 19:45, 5 November 2010 (UTC)
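- (For later readers, a minimal numpy sketch of the shift; the coefficients and the shift amount are arbitrary examples. numpy's Polynomial objects compose under ordinary evaluation, so substituting (u − k) for t does the expanding and collecting in one step:)
from numpy.polynomial import Polynomial

f = Polynomial([1, 2, 3])      # f(t) = 1 + 2t + 3t^2, low-order coefficients first
k = 5                          # u = t + k, so t = u - k

g = f(Polynomial([-k, 1]))     # evaluate f at the polynomial (u - k): composition
print(g.coef)                  # coefficients of the same curve in terms of u

t = 2.0                        # sanity check: f(t) == g(t + k)
assert abs(f(t) - g(t + k)) < 1e-9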
Windows command prompt FOR loop
First, I am really sorry for clearing all previous information by accident. I don't know how it happened. Second, does anybody have a solution to this problem? If I use the FOR loop under the Windows command prompt, as in this example:
for /l %i in (1,1,10) do (@set x=%i
@echo %x% , %i)
The result is:
)
10 , 1
C:\>(
)
10 , 2
.
.
.
)
10 , 10
Why does x always take the value of 10 rather than i?--Email4mobile (talk) 17:54, 4 November 2010 (UTC)
- See this page for the solution. -- BenRG (talk) 04:46, 5 November 2010 (UTC)
- So all I had to do was enable delayed expansion at the beginning, as in this line:
setlocal ENABLEDELAYEDEXPANSION
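rem A sketch of the corrected loop in a .bat script (note doubled %%i in a
rem script versus single %i at the interactive prompt). With delayed expansion
rem enabled, !x! is read each time the line executes, whereas %x% is expanded
rem only once, when the whole FOR block is parsed.
setlocal ENABLEDELAYEDEXPANSION
for /l %%i in (1,1,10) do (
    set x=%%i
    echo !x! , %%i
)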
- Thank you very much.--Email4mobile (talk) 09:03, 5 November 2010 (UTC)
How has this image been generated?
How has this image been generated? I understand that the "crisp" version on the left is simply a screen grab, but I'm asking about the "blurry" version on the right. The "blur" is, after all, only present in the real, analog world, and the computer's memory is completely oblivious to it. Therefore a screen grab would produce an identical copy of the version on the left side, regardless of what the image was viewed on. Grabbing it via the real world instead of straight from the computer's memory, in other words by photographing it, would very likely not produce such a pixel-perfect copy with only the "blur" effect, because various real-world details would induce minute differences. Is the "blur" effect merely a simulation added afterwards, or what magic has been used? JIP | Talk 18:56, 4 November 2010 (UTC)
- I think you'd have to ask the person who made it to know for sure. I guess if someone took the left image, made the 3 colour layers into 3 separate layers (e.g. in GIMP), moved one colour a pixel right and another a pixel left, you'd get something like this. -- Finlay McWalter ☻ Talk 19:05, 4 November 2010 (UTC)
- No, it's a more complex effort. I verified it with xmag, and it's not as simple as shifting one RGB channel left and another right. I've asked the original creator, User:NewRisingSun, about it. Let's see if he responds. JIP | Talk 19:49, 4 November 2010 (UTC)
- The source code used to generate the image is on the image page. Does that help at all? --Tagishsimon (talk) 20:05, 4 November 2010 (UTC)
- No. The source code is only usable for direct screen grabs. It has no effect whatsoever on the image quality, because after all, the program itself is a mathematical abstraction, and as far as it is concerned, the image quality is always 100% perfect. JIP | Talk 20:27, 4 November 2010 (UTC)
- This is almost certainly a variant of what Finlay has suggested. Definitely generated by taking 3 slightly shifted (+/- 2 pixels) copies of the original image, possibly modifying them and superimposing them in some manner. So an attempt to digitally reverse engineer the imperfections of the analog world. 213.160.108.26 (talk) 23:34, 4 November 2010 (UTC)
- The image was probably generated with an emulator like DOSBox or MESS, or a program that uses similar algorithms to emulate CGA composite artifacts. Read on for a little bit about how such an algorithm might be created.
- To put the image in context for other responders here, it illustrates the color artifacts seen when the CGA adaptor is connected to an NTSC composite monitor, see Color Graphics Adapter - Special effects on composite color monitors. I myself was fascinated by the palette images shown a little lower in that section, and wondered how the patterns generated colors on the composite monitor.
- Edit: Just to clarify, in the image shown above, the screen on the left is what you'd see on an RGB monitor. The screen on the right is what you'd see on an NTSC monitor. The CGA adaptor sends different signals for an RGB monitor than for an NTSC monitor, so it's slightly incorrect to think of them as a "perfect" screen shot and a "blurry" real-world simulation of the same image. The CGA adaptor is using the same video memory pixels to generate both signals, but the RGB signal is able to represent the color of each individual pixel, whereas the NTSC signal must use a color wave that ends up being four pixels wide. —Preceding unsigned comment added by Bavi H (talk • contribs) 02:06, 5 November 2010 (UTC)
- Part of the answer requires you to know about how the NTSC composite signal works, especially the color burst and color encoding. I can't find a good description of the NTSC signal to link to yet. You can find information about how the NTSC signal works online, but you might have to read several different documents to get a good understanding of it.
- The other part of the answer is knowing what signals the CGA adaptor sends on its composite output. Go to CRTC emulation for MESS and scroll down to the section "Composite output" for details about that.
- Basically, for each CGA color, the CGA adaptor composite output generates a square wave shifted by a certain amount with respect to the color burst. In the 640×200 mode, the color wave is four pixels wide, so you can actually use groups of four black-or-white pixels to make your own color wave. If you shift the pattern by one pixel, you'll get a different hue. Here's an example I made with QBASIC running in DOSBox to help understand the order to the black and white patterns: cga-composite.png
- In the 320×200 modes the color wave is two pixels wide, because the pixels are twice as wide. By carefully calculating where the CGA composite output color wave for each color is sliced and combined, then decoding the wave as an NTSC color signal, you can predict the resulting color on the composite monitor screen.
- After studying this, I began to see how the colors are predicted, but didn't go far enough into the math to understand it all. (For example, I don't yet understand how a square wave is "seen" as a sinusoidal wave of NTSC color signal. Edit: Or how partial or irregular color waves become color fringes like those in the text-mode image above.)
- While researching this I also found Colour Graphics Adapter Notes - Color Composite Mode, which has a zip file with images captured from an actual CGA adaptor (captured using a TV card with an NTSC composite video input). It includes image captures of the same palettes simulated in the CGA article above. It also includes image captures of Flight Simulator, which are interesting to compare to the Flight Simulator images on the MESS video emulation page linked above. --Bavi H (talk) 01:18, 5 November 2010 (UTC)
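- (For later readers, regarding how a square wave is "seen" as a sinusoid: an NTSC decoder in effect correlates the chroma signal with the subcarrier, which picks out only the square wave's fundamental sine component; its phase relative to the burst sets the hue, and its amplitude the saturation. A toy Python sketch of that idea, not a full NTSC demodulator; the 4-pixel pattern stands for one subcarrier cycle in the 640×200 mode:)
import cmath

def decode(pixels):
    # Treat a 4-pixel black/white pattern as one period of the color wave.
    n = len(pixels)
    luma = sum(pixels) / n                        # DC term -> brightness
    # first Fourier coefficient = correlation with one subcarrier cycle
    c = sum(p * cmath.exp(-2j * cmath.pi * k / n)
            for k, p in enumerate(pixels)) / n
    hue = cmath.phase(c) * 180 / cmath.pi         # degrees, relative to the burst
    saturation = 2 * abs(c)
    return luma, saturation, hue

print(decode([1, 1, 0, 0]))  # one hue...
print(decode([0, 1, 1, 0]))  # ...the same pattern shifted one pixel: hue rotates 90 degrees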
who's got money? How do I find an investor with vision?
I want to find an investor with vision, so that if I explain to them in a few words what I would like their money for, they will see that (or whether) it works. I don't want to waste my breath on people who wouldn't understand anyway! How do I find these people? If anyone here knows, they can also leave some contact information and I can ask them personally. Thank you! 84.153.205.142 (talk) 20:42, 4 November 2010 (UTC)
- It depends entirely on the quantity of money you seek and what you plan to do with it. You can start by investigating bank loans and credit card advances. These organizations will happily lend you large sums of money, at market interest-rates, for you to use for almost any purpose. Nimur (talk) 21:47, 4 November 2010 (UTC)
- Unfortunately, what you ask for is unlikely to be handed to you like this, because human communication is more difficult than anyone thinks, and everyone has their own opinions about things like risk and how likely your idea is to succeed. The ability to raise money is a core requirement of being a business owner, for most businesses, and usually owners have to pitch their idea and plan many, many times before an investor says "yes". Our article Angel investor has some links in the References section that may help you. Comet Tuttle (talk) 22:24, 4 November 2010 (UTC)
- With the exception of people who already have a track record of building successful businesses, most inventors and entrepreneurs have great difficulty getting an angel investor even to speak to them. If they do, they mostly want to see a working prototype or a business that's generating revenue (Dragon's Den, for all its faults, isn't a bad indication of what angel investors are looking for). Tony Fernandes, who started Air Asia after seeing an Easyjet ad, mortgaged his house to pay for it. No-one was interested in James Dyson's ballbarrow, despite his having many working prototypes, so he mortgaged his house too. It took Ron Hickman, inventor of the Black and Decker Workmate, years, and apparently about 100 prototypes, before he persuaded someone to market it; and Hickman had an impeccable record of design and engineering management: as the chief engineer of Lotus Cars, he had already designed the Lotus Elan and Lotus Europa. Paul Graham (whose investment company Y Combinator is roughly an angel investor) has written a bunch about what he/they look for when investing in new enterprises here. Several VC books I've read come to roughly the same conclusions: they invest in people (that is, people who have a proven track record of making stuff and getting things done) and in working product; some are pretty blunt in thinking that if you're still working at BigCo and haven't quit to work on the project yourself, to your own obvious cost and risk, then you don't believe enough in the thing, so why would they. -- Finlay McWalter ☻ Talk 23:09, 4 November 2010 (UTC)
- Nice overview, Finlay McWalter! Almost the first question any investor will ask (or research) is "What do you have in it?" And the answer should include great amounts (by your personal standard) of time, money and experience. Bielle (talk) 23:21, 4 November 2010 (UTC)
- Also, as a minor correction: as mentioned in our articles, Tony Fernandes didn't start Air Asia. He bought the then-failing airline for a nominal sum and turned it around into an extremely successful budget airline. Nil Einne (talk) 09:20, 8 November 2010 (UTC)
- The people I know who can drum up quick cash for projects use angel investors (as opposed to venture capitalists; the difference is in the amount of money and control, angels being less in both, and thus a bit easier to work with, they say). There are lots of sites that come up if you google "finding angel investors"; I've no experience in it (other than chats with friends who have made good on such things), so I can't tell what's good advice or not. I would note that exemplars (like those Finlay names) are not necessarily "normal" models; they are notable because they are rare cases for one reason or another. --Mr.98 (talk) 00:55, 5 November 2010 (UTC)
- No-one (except perhaps a close relative) is going to give you wads of money in return for a "few words". You need to have a well-researched business plan. 92.29.112.206 (talk) 19:48, 5 November 2010 (UTC)
- To add a quick note here and a twist on how difficult it is to get investment... and this is from personal experience. The people I went to with my partner didn't want to give us the money because they were concerned that if either of us were hit by a truck, the business would not have a driver and would suffer. So make sure you go with a solid business plan as well as a backup plan, even for yourself, because the investor wants to avert risk too. Sandman30s (talk) 07:11, 9 November 2010 (UTC)
November 5
Problem after installing new video driver
Hi Reference Desk, I recently installed the ATI Catalyst 10.10 drivers for my ATI Mobility Radeon HD 5450 on Win7 64 bit. Now, it seems to be limiting the number of display colours, and when things with colour gradients are visible, for example the background of Microsoft Word, it looks like a compressed JPEG screenshot and there are very visible steps between the colours. I've recalibrated the display, reinstalled the driver, reset to factory settings, fiddled with Windows colour settings, all to no avail. I was wondering if this was a known issue, and/or if anyone had a clue how to fix it?
Thanks 110.175.208.144 (talk) 06:30, 5 November 2010 (UTC)
- At a rough guess, you might have been reset to basic colour. Press F1 for help, type Aero, open the Aero troubleshooter and follow the prompts to get Aero back... this might also fix your colour problems and get the 24/32-bit colour gradients back. The troubleshooters in Win7 are surprisingly good and can make some low-level changes when they have to. Worst case scenario: you can go into Device Manager and roll back the drivers. Sandman30s (talk) 09:12, 5 November 2010 (UTC)
- That didn't help, but, knowing that the Aero troubleshooter did nothing, eliminating the possibility of a DWM problem, I thought about what other things manage the visuals of the computer, and I thought of the Themes service. I restarted that... and voila! Colours :) But now I don't know why I had to manually restart the Themes service, and why the problem did not get fixed on a reboot previously :/ Thanks for your help! 110.175.208.144 (talk) 23:52, 5 November 2010 (UTC)
Automatic form filling application required
Job applications and other bureaucratic documents take too long to fill in neatly. Is there any application (preferably free, with source code in VB6 or VC++6) that I could either use directly or modify to (1) recognise the various rows, columns and common questions that need filling in, by both text and graphics recognition, with a manual mail-merge-type option if this fails, and (2) using a database, fill in the form in all the right places? Unlike mail merge in MS Word, it would have to fill in lots of separate sections instead of just one (the address), and of course the size would be standard A4; I can't get this size using MS Word, or at least my version, which is a few years old. —Preceding unsigned comment added by 80.1.80.5 (talk) 13:19, 5 November 2010 (UTC)
- No. The field labels in forms are often ambiguous. So, a computer will need to understand the purpose of the form to attempt to understand the meaning of the field label. Since computers cannot think, they cannot fill out forms. At best, a computer can assume "Name" means your full name and "DOB" means your date of birth. But, it wouldn't understand what to do if "Name" was the name of your dog or "DOB" is the date on the back of your passport. In the end, you (the human) must read every label and decide what to put in every field. -- kainaw™ 14:22, 5 November 2010 (UTC)
- Yes, with caveats. See http://www.laserapp.com.
- DaHorsesMouth (talk) 22:25, 5 November 2010 (UTC)
"number of cores" on 15" Macbook Pros?
Hi,
It is not clear to me; there are three configurations of MacBook Pro:
1) 2.4 GHz i5 2) 2.53 GHz i5 3) 2.66 GHz i7 (with 2.28 GHz)
What is the real performance difference? Are the first two both dual-core? Only the third one says "which features two processor cores on a single chip"??
Plus, as an added point of confusion, i7 has hyperthreading enabled, doesn't it? So, is it the case that option 1 and 2 are two cores shown to the OS as such whereas option 3 is two cores shown as four cores to the OS?
Or, is it exactly half of what I just said? Thanks! 84.153.205.142 (talk) 15:01, 5 November 2010 (UTC)
- based on the Intel page the i5 processors are either 2 or 4 core processors (depending on model - the Apple specs are not precise, though the 'features' page at Apple says dual core).
- cores are cores, they are not 'shown to the OS'. Software needs to be written to take advantage of multiple cores, but for those apps that are, you will see moderate increases in performance with the higher-end chips. This will be noticeable in casual use (apps opening slightly faster, long processes finishing slightly sooner), very noticeable in processor-intensive tasks, and may increase the practical longevity of the machine itself (in the sense that future improvements in hardware, and the consequent revisions to software, won't leave the machine in the dust quite as soon). --Ludwigs2 15:35, 5 November 2010 (UTC)
- Wandering around the Apple website confirms that all three chips are dual core. However, this http://www.macworld.com/article/150589/2010/04/corei5i7_mbp.html gives the chip part numbers: i5 520M, i5 540M, and i7 620M; assuming that wasn't speculative and is true, then all three have two cores, with hyperthreading, meaning a total of 4 threads (or "4 virtual cores"). Finding out more info on these chips is trivial - just use search and the first result probably takes you to the Intel page, e.g. http://ark.intel.com/Product.aspx?id=47341 . The article MacBook Pro has the same info.
- The second i5 is (as far as I can tell) just a faster clock. The differences between the i7 and i5 include a larger cache in the i7, but I wouldn't be surprised if there are other architectural differences. Actually, looking at generic benchmarks between these seems to suggest that the i7 part is no different from an i5 with a bigger cache and a higher clock, but that's speculation. 94.72.205.11 (talk) 16:36, 5 November 2010 (UTC)
- Probably not my place to say, but looking at UK prices [1] it really looks like the additional cost of the 15"s with the better processors is way beyond the underlying processor price difference, and borderline or above what is worth paying for. The base model is easily good enough for 90+% of people. You can easily find 'real world' comparisons searching for "apple macbook pro benchmarks i5 i7". 94.72.205.11 (talk) 16:50, 5 November 2010 (UTC)
- Thank you: when you say that "software needs to be [specially] written to take advantage of multiple cores", are you just talking about hyperthreading, meaning that if I were just running a SINGLE processor-intensive application, it needs to be written in that way? Or, are you talking about something more general? Because don't two concurrently running different applications AUTOMATICALLY get put on their own core by the OS? In that sense, my reasoning is informed by the "dual-processor" behavior I had learned of a few years ago. In fact, isn't a dual-core, logically, just 2 processors? Or, is there a substantial difference as seen by applications and the OS between a dual-core processor, and two processors each of which is exactly the same as a single core? (I don't mean difference in whether they access as much separate level-1 and level-2 cache, I mean as seen by the OS and applications). If there is a substantial difference, what is that difference?
- I guess my knowledge is not really up to date and what I would really like to know is the difference between multiple CPU's (dual and quad CPU machines) of yesteryear power desktops, and multiple cores of today? Thank you! 84.153.205.142 (talk) 16:48, 5 November 2010 (UTC)
- (replying to question for Ludwigs) There hasn't been any change of definition (one possible source of confusion is that sometimes a processor can have two separate physical chips within it, or one chip with two processors on it - both are, as far as end results are concerned, the same...)
- As per multiple processes on multiple cores or threads - yes you are right - the only time there isn't any advantage is when you run a single (non threaded) program on a multicore machine. (but there are still a lot of examples of this)
- OS's can handle multiple threaded processors in just the same way they can handle multiple core processors - ie an OS will act like it's got 4 processors on a 2 core hyperthreaded machine , no further interaction required.94.72.205.11 (talk) 16:54, 5 November 2010 (UTC)
- If I understand it correctly, the dual-core advantage is that a multi-threaded app that's designed to take advantage of it can toss different threads onto different cores, making the handling of some processor-intensive tasks more efficient. Apps need to be designed for it because there are some technical issues involved in choosing which core to send a thread to and how to handle data that's being processed on different cores. Basically it's the difference between an office with one photocopier and an office with two photocopiers - you can get a lot of advantages from sending different jobs to each photocopier if you plan it out, but the 'old guy' in the office is just going to chug away on one photocopier mindlessly. Most apps made in the last few years support it - you're only going to lose the performance advantage if you have (say) an old version of some high-powered app that you're continuing to use to save buying an upgrade. I don't know enough about hyper-threading to know whether that also requires specially-coded apps or whether it's transparent at the app level. --Ludwigs2 17:12, 5 November 2010 (UTC)
- The key term here is Processor affinity, which mentions the type of problem you describe (or the analogy where we have 4 photocopiers, two each in two rooms, and I prefer to use the two in the same room to avoid walking up and down the stairs; that's an analogy for a dual-core hyperthreading processor, 4 threads in total). Programming tools such as OpenMP#Thread_affinity can set it; whether OSs can detect and set thread affinity without being told is something I don't know. 94.72.205.11 (talk) 17:22, 5 November 2010 (UTC)
- (reply to OP) As an example of the difference between yesteryear and today: an old quad-core Mac, e.g. [2], used 2 dual-core chips, whereas a modern quad-core Mac has a single chip with 4 cores on it. Ignoring that they've changed from IBM's POWER chip family to Intel's x86/64 family, the only difference I can think of is that today's multicore chips (4 or more cores) have L3 cache, whereas the old ones tended not to. Obviously things have got faster, and the chips improved, but there isn't anything I can think of that represents a major break of progression from one to the other. (Probably missing something obvious.) 94.72.205.11 (talk) 17:45, 5 November 2010 (UTC)
- There's some confusion about what it means for an operating system to "expose" a core. In a modern multicore system, the hardware's cores may or may not be exposed by the operating system. In other words, different operating systems (and hardware) have different "contracts" between multi-threaded programs and the hardware that will execute the multiple threads. If you create a new kernel thread (on a POSIX system), or a new Process (on Windows), the operating system must implement the necessary code to task a particular process to a particular core (otherwise, there is no performance gain from threading - all threads execute sequentially on one core). When an operating system "exposes" a core, it means that a programmer is able to guarantee a particular mapping between software processes and hardware processors (or at the very least, receive an assurance from the OS that the scheduling and delegation to a CPU will be managed by the system thread API).
- An operating system might be using multiple CPUs, even if it doesn't show that implementation to the programmer/user . Or, it might be showing cores as software abstractions, even though they do not exist. The number of "exposed cores" and the number of "actual cores" are not explicitly required to be equal. This detail depends entirely on the OS' kernel. See models of threading for more details.
- Modern programming languages, such as Java or C#, use hybrid thread models - meaning that the system library will decide at runtime how the kernel should schedule multiple threads. This guarantees that an optimal execution time can be delivered - especially if the other CPU cores are occupied. It invalidates the sort of simplistic multi-core assumptions that many programmers make (i.e., "my system has 4 cores, so I will write exactly 4 threads to achieve 100% utilization") - and replaces this with a dynamic scheduler that knows about current system usage, cache coherency between cores, and so on. Nimur (talk) 18:28, 5 November 2010 (UTC)
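- (A small illustration of the "exposed cores" point, sketched in recent Python, 3.4 or later; a sketch rather than a benchmark. The OS reports logical CPUs, so a dual-core machine with hyperthreading typically shows 4, and spreading CPU-bound work across processes is how a program actually uses more than one core:)
import os
from concurrent.futures import ProcessPoolExecutor

def busy(n):
    # a CPU-bound task; separate processes can land on separate cores
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    print("logical CPUs reported by the OS:", os.cpu_count())
    with ProcessPoolExecutor() as pool:  # defaults to one worker per logical CPU
        print(list(pool.map(busy, [10**6] * 4)))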
display size
Sorry folks. I know I asked this question a while back and got a great answer. problem is: I can't find the answer. I tried searching the archives but no luck. So, I will ask the question again (and save the answer!).
I am using Vista on an LCD monitor. When I go to the net, the size is 100% but this is too small. I set it at 125% but can't get the settings to stay there and have to adjust them each time. Can someone help me (again)? 99.250.117.26 (talk) 15:54, 5 November 2010 (UTC)
- It's Wikipedia:Reference_desk/Archives/Computing/2010 May 5#125% screen size.—Emil J. 16:00, 5 November 2010 (UTC)
Hmmm. That answer came in just under two minutes . . . Wikipedia is getting slow! lol. Thanks a lot. 99.250.117.26 (talk)
List associated values in MS Access
I have an Access database. There are two tables in this database. The primary key in the first may be associated with multiple primary keys in the second. I would like to find a way to list in the first table the primary keys from table 2 associated with a primary key from table 1. Is this even possible? 138.192.58.227 (talk) 17:39, 5 November 2010 (UTC)
- The usual way to do what you are asking for (if I understand it correctly) is to have a third table that sits between those two tables and maintains the associations (e.g a Junction table, among its many names). It's a lot easier than trying to put that information into the first table, and the associations can be viewed with clever SQL queries. --Mr.98 (talk) 17:57, 5 November 2010 (UTC)
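- (To illustrate the junction-table pattern concretely, a sketch using Python's built-in sqlite3; the table and column names are invented, and Access's SQL dialect differs in details, e.g. it has no GROUP_CONCAT, though the join itself is the same:)
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (key1 INTEGER PRIMARY KEY);
CREATE TABLE t2 (key2 INTEGER PRIMARY KEY);
CREATE TABLE link (                 -- the junction table
    key1 INTEGER REFERENCES t1(key1),
    key2 INTEGER REFERENCES t2(key2),
    PRIMARY KEY (key1, key2)
);
INSERT INTO t1 VALUES (1), (2);
INSERT INTO t2 VALUES (10), (11), (12);
INSERT INTO link VALUES (1, 10), (1, 11), (2, 12);
""")

# for each key in table 1, list the table-2 keys associated with it
for row in con.execute("""
    SELECT t1.key1, GROUP_CONCAT(link.key2)
    FROM t1 JOIN link ON link.key1 = t1.key1
    GROUP BY t1.key1
"""):
    print(row)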
Help
I made the mistake of leaving my crippled tower connected to the internet, and the damn thing auto-updated last night. The trouble is that after auto-updating, the tower automatically rebooted; however, the hard drive in my home tower is on its last legs and now the machine will not reboot. Every time I clear the Windows XP screen I get taken to a blue screen announcing a boot-up error and telling me the system has been shut down. There is precious material on the hard drive that I desperately want to put on an external hard drive before the tower goes down permanently, so I am asking if there is any way at all to get the machine back up and running one last time so I can salvage what I need from it. TomStar81 (Talk) 19:30, 5 November 2010 (UTC)
- Consider placing the bad hard-drive in another system (that boots off a good hard-drive); or booting from a live CD. Nimur (talk) 19:48, 5 November 2010 (UTC)
- (after e/c)
- Two ways come to mind.
- One is a boot disk. You can make a disk that will boot the computer off the CD-ROM drive. You'll only get a DOS-style prompt, but that should be enough to copy files. (You could also make a Linux boot disk easily enough, if you're comfortable with Linux.)
- Another way is to take the drive out, and put it into a USB drive enclosure. This will turn it into an external drive. Plug both drives into some other computer and copy them that way.
- However, if the files you're hoping to retrieve are corrupted, you're going to have difficulties in either case. There are professionals that can retrieve almost anything but they're quite pricey. APL (talk) 19:49, 5 November 2010 (UTC)
- Actually, far from the old-fashioned boot disks I was imagining, it looks like some Linux LiveCDs can give you a fully usable graphical user interface. Might be the easiest way to go.
- Try (on some other, working, computer) to make yourself a live CD of a nice and user-friendly version of Linux (ubuntu for example) and copy the files that way. "Learning Linux" can be intimidating, but you don't have to learn anything to drag and drop some files from one drive to another. APL (talk) 19:55, 5 November 2010 (UTC)
- Indeed. The Live CD is the "boot disk" of the new millennium - it provides as many features as a full-blown graphical operating system. It should be fairly easy to operate - simply create the disc, boot from it, and copy your hard-disk to a safe location (like a USB drive or a network drive). Here is the official Ubuntu distribution - it is free to download and use. Nimur (talk) 20:01, 5 November 2010 (UTC)
Monitor as TV
I'm about to move in to a small house in the UK. I will purchase either a laptop or desktop PC. I also want to watch television.
I noticed that quite large-screen monitors have dropped in price, and read a review of an example product in 'PC Pro' magazine - a 27 inch monitor for 200 pounds. Not wishing to advertise it here, but it was their 'best buy', and it is this one.
I'll be using UK Freeview TV, and will probably buy a Freeview+ box to act as a receiver and recorder.
So - one question is, how to connect it up so that I could watch TV on it. I don't want to use an in-computer TV card, because I'd want to keep the PC/laptop free for other things, and also because I've found TV-cards to be somewhat unstable.
Basically, I want to watch TV on a reasonable-sized screen, and sometimes use the big screen as a computer screen.
It seems that these monitors mostly make reasonable TVs - is that correct? Whereas TVs are often poor monitors.
- Would this type of monitor make a reasonable TV?
- How would it compare to a similar-price actual TV?
- How can I use it as a TV without needing the computer switched on (i.e. how to connect it to a Freeview receiver box)?
- Is this a reasonably sensible approach? —Preceding unsigned comment added by 92.41.19.146 (talk) 21:40, 5 November 2010 (UTC)
- You can get TV/monitors with integrated Freeview; however, for £200 the size would be about 23", so not as big. You definitely get more screen for £200 if you just buy a monitor.
- However, the monitor only has DVI and VGA inputs, which means it will not work with a standard Freeview box's SCART output; it would, however, work with a Freeview HD box with HDMI output (connect via an adaptor to DVI; the monitor supports HDCP, so it will work with an HDMI adaptor).
- The monitor is likely to have an absolutely fine display. (I use mine to watch stuff off Freeview in standard definition - it's fine.) Old TVs made terrible monitors (too low resolution); modern hi-def TVs actually make fine monitors.
- The only other issue is that monitors typically have no speakers, or very poor sound, so you can expect to need a sound system; that could be an additional expense. (I'd expect to be able to get something suitable to make sound to TV standard for £50, but more if you want 'cinema sound'.) Make sure the Freeview box has the right sort of audio out that you can use.
- The big issue here is that you'll need a Freeview HD box, which adds a lot to the price (~£80+ currently, probably soon cheaper as it's relatively new).
- It appears to be a better deal than the comparative standard price; however, if you check large shops you can get TVs which will work as monitors, e.g. a random pick: http://direct.asda.com/LG-32%22-LD450-LCD-TV---Full-1080P-HD---Digital/000500571,default,pd.html 32" at under £300 - it's a little bigger, and will work with any input. If you compare the additional costs of the monitor route, this might seem attractive. (Note it doesn't have Freeview HD, just Freeview.) Generally there is usually a sub-£300 30"+ hi-def TV on special offer at one of the large supermarkets (i.e. these offers are common). 94.72.205.11 (talk) 22:46, 5 November 2010 (UTC)
- Thanks; interesting comments and info - especially re. HD Freeview. As I plan to buy a freeview recorder anyway, the HD version is not much extra cost, and that sounds a reasonable solution.
- If anyone has actual experience with a monitor of this kind of size, I wonder if 1920 x 1080 starts to look like far too low a resolution for using as a 'regular' PC desktop when it gets up to the 27-30 inch sizes?
- The sound isn't a problem, by the way - I have a decent PC-speaker system that I'd use (altec lansing with a sub-woofer) which is gonna be way better than any built-in stuff.
- The 200-pound monitor, plus an HD freeview box w/ HDMI out, is sounding like quite a good option so far. —Preceding unsigned comment added by 92.41.19.146 (talk) 23:20, 5 November 2010 (UTC)
- Bit of maths: because it's a widescreen monitor, the 27" diagonal converts into ~13" of screen height for 1080 pixels. A bog-standard 1024-high screen is ~11" high, so the pixels are only about 13/11 times bigger (or 18%): noticeable, but probably no big deal.
- Also, that works out to about 80 dots per inch if my maths is correct; a better article is Pixel density. 94.72.205.11 (talk) 23:59, 5 November 2010 (UTC)
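- (For anyone checking that arithmetic, a quick Python sketch; the panel size and aspect ratio are the thread's own numbers:)
from math import hypot

diagonal = 27.0                  # inches, 16:9 panel
height = diagonal * 9 / hypot(16, 9)
print(round(height, 1))          # ~13.2 inches of screen height
print(round(1080 / height, 1))   # ~81.6 pixels per inch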
C or C++
Hello there, I want to learn a programming language. One of my friends told me to start with C, but somehow I started with C++. What's the difference between C and C++? I don't have any programming experience. What I want to do is make different kinds of software. So which one should I choose?--180.234.26.169 (talk) 22:31, 5 November 2010 (UTC)
- The primary difference between C and C++ is that C++ allows for object-oriented programming. You can write C++ programs without objects. You can fake objects in C with structs and function pointers. But, the main reason to choose C++ over C is the ease of object-oriented programming. -- kainaw™ 22:48, 5 November 2010 (UTC)
- "Software" is very broad. What kind of software do you want to write?
- C++ was one of my first programming languages, and I wish it weren't; it is too big and complicated. In particular, C++ requires you to think about things that aren't important unless you are really concerned about speed. C has some of the same problems, but at least it's simple, so it's not a bad choice. I think Python and Scheme (in particular Racket, the Scheme-derived language my school uses) are excellent choices for learning to program. Some people are put off by the fact that these languages are not as popular as, say, Java. But (1) if you work on your own, you should choose the best tool, not the one that everyone else chooses, and (2) learning with a language designed for elegance rather than industry makes you a better programmer in any language. Paul (Stansifer) 02:46, 6 November 2010 (UTC)
- I agree with everything except your last statement. Learning how the computer works at a very low level makes you a better programmer. Understanding exactly how using a floating-point operation instead of an integer operation will affect your program is important. Understanding what may happen when you try to compare two floating-point numbers is important. Understanding how the stack is affected when you use recursion - especially unnecessary tail-end recursion - is important. Understanding how the memory cache is being used with loops is important. You can use an "elegant" language that makes guesses at what is optimal, but you are left hoping that the programming language is making good decisions. More often than not, the high-level languages make poor decisions and lead to slower execution and a waste of resources. Personally, I teach PHP first, then Java (since I don't like the implementation of objects in PHP), then C++. I don't teach C because anyone who knows C++ should be capable of learning C quickly. -- kainaw™ 03:05, 6 November 2010 (UTC)
- All of those things can be important, but performance only matters some of the time. Some, even most, projects will succeed just fine if they run 100 times more slowly than they could, so programmers shouldn't worry about wasting cycles. (Knowing enough about algorithms to get optimal big-O is usually more worthwhile.) But writing well-designed programs always requires thought; novices should start solving that problem, and worry about performance when they have to. Paul (Stansifer) 02:41, 7 November 2010 (UTC)
- Per Program optimization#Quotes, don't bother optimising unless and until it's really time to do so.
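- (As a concrete instance of the floating-point comparison pitfall mentioned above, sketched in Python for brevity; C and C++ doubles behave the same way under IEEE 754:)
import math

a = 0.1 + 0.2
print(a == 0.3)              # False: neither 0.1 nor 0.2 is exact in binary
print(a - 0.3)               # a tiny residue, about 5.55e-17
print(math.isclose(a, 0.3))  # True (Python 3.5+): compare with a tolerance instead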
- With regards to the original question, I'm partial to a "C first" approach. Learn C, because it teaches you important things that many or most other modern languages neglect, such as dealing with pointers and memory management. Only when you have a decent grasp of C do I recommend learning C++. C++ has some niceties that, when learnt first, can leave you confused or frustrated when starting to learn a language without those features. It can generally be said that almost* any language has some advantages over other ones, and C and C++ both have their advantages over one another. I find C to be a good choice for single-purpose, fast programs, where objects are not required. C++ has some weight over C when it comes to large, multi-purpose programs, since the object-oriented aspect, and the added "sugar" of not having to deal with a lot of the lower-level bookkeeping such as pointer and memory management, allow you to focus more on the goal than on the design. On the other hand, it can be argued that it's much easier to become sloppy with C++, which is another good reason to get into a C programmers' habit of cleaning up resources et al.
- *I say "almost" here, because there are some languages out there that are more disgusting than the idea of blobfish mating. --Link (t•c•m) 09:24, 7 November 2010 (UTC)
- "C is to C++ as lung is to lung cancer" ;-). Seriously, C is a very good language for its niche. It's rather small, and rather consistent. It only has a small number of warts, and most of them turn out to be rather well-considered features on closer inspection. As a result, C is fairly easy to learn. It also exposes much of the underlying machine, so its pedagogically useful if you want people to be aware about how computers work. Everybody who knows assembler and C also has a fairly good idea of how nearly all C features are mapped into assembler. C++, on the other hand, is a very large, very rich, very overladen language. I'd be surprised to find anybody who actually "knows C++" in a full sense. C++ is rather ugly - it has come to fame because it allowed people to reuse C knowledge and C infrastructure (Compilers, linkers, and most of the tool chain) while supporting objects and classes (in fact, early on an intermediate form was called C with classes). Because of that, it took off and is now widely used, albeit still but-ugly. The only reason to learn C++ is to expect to be required to use it for some project. Java (programming language) is a better language, as is C Sharp (programming language), and if you go into exotics, Smalltalk and Common Lisp Object System are both cleaner and prettier (although one might argue that Scheme is to Common Lisp as C is to C++ ;-). --Stephan Schulz (talk) 10:03, 7 November 2010 (UTC)
- Java, as a language, is indeed quite nice, although various Java Virtual Machines (notably HotSpot, IcedTea and Blackdown) have been known to make me want to remodel my own face using an angle grinder. I don't really think C++ is that bad, but it's true that it's quite convoluted (as is Java, for that matter, but it's less obvious because it hides the low-level parts). Personally, I prefer Python: I find it much easier to get a working prototype in Python than in C or C++. Generally, my preferences by application are as follows: C for embedded programming and small-ish things that need to be self-contained/run very fast/etcetera, C++ for things that need C's low-level capabilities but benefit greatly from object-oriented design (e.g. 3D games), and Python for essentially everything that isn't critically dependent on self-containedness or speed. I used to be a Java fanboy, but I haven't done anything with it for a long time, since I've become increasingly frustrated with Sun, and Python can give you almost everything Java can. --Link (t•c•m) 18:10, 7 November 2010 (UTC)
- "C is to C++ as lung is to lung cancer" ;-). Seriously, C is a very good language for its niche. It's rather small, and rather consistent. It only has a small number of warts, and most of them turn out to be rather well-considered features on closer inspection. As a result, C is fairly easy to learn. It also exposes much of the underlying machine, so its pedagogically useful if you want people to be aware about how computers work. Everybody who knows assembler and C also has a fairly good idea of how nearly all C features are mapped into assembler. C++, on the other hand, is a very large, very rich, very overladen language. I'd be surprised to find anybody who actually "knows C++" in a full sense. C++ is rather ugly - it has come to fame because it allowed people to reuse C knowledge and C infrastructure (Compilers, linkers, and most of the tool chain) while supporting objects and classes (in fact, early on an intermediate form was called C with classes). Because of that, it took off and is now widely used, albeit still but-ugly. The only reason to learn C++ is to expect to be required to use it for some project. Java (programming language) is a better language, as is C Sharp (programming language), and if you go into exotics, Smalltalk and Common Lisp Object System are both cleaner and prettier (although one might argue that Scheme is to Common Lisp as C is to C++ ;-). --Stephan Schulz (talk) 10:03, 7 November 2010 (UTC)
- All of those things can be important, but performance only matters some of the time. Some, even most, projects will succeed just fine if they run 100 times more slowly than they could, so programmers shouldn't worry about wasting cycles. (Knowing enough about algorithms to get optimal big-O is usually more worthwhile.) But writing well-designed programs always requires thought; novices should start solving that problem, and worry about performance when they have to. Paul (Stansifer) 02:41, 7 November 2010 (UTC)
November 6
Batch file to control network settings
My friends and I often get together and play games via LAN, but to do so, we always have to change our network settings considerably. First, we disable Windows Firewall. Then, we open 'Network Connections' and disable 'Wireless Internet Connection'. I don't see how those two steps are necessary, but some of our games seem to take issue if we don't do them. :\ Then we right-click on 'Local Area Connection', select 'Properties', then 'Internet Protocol (TCP/IP)', then 'Properties', select 'Use the following IP address', and type one in. I think that's called setting up a static IP? Anyway, then we go back to 'Network Connections' and do the same for '1394 Connection 13'. Is there a way to create a batch file that would automate this? And hopefully one that would reverse it too. KyuubiSeal (talk) 01:12, 6 November 2010 (UTC)
- The netsh command is used to manipulate most of the Windows IP stack parameters from the command line. Here is an example. 87.115.152.166 (talk) 01:54, 6 November 2010 (UTC)
- It sets the 'Local Area Connection' correctly, but nothing else. At least that's a third done though. KyuubiSeal (talk) 03:31, 6 November 2010 (UTC)
- netsh interface set interface "Local Area Connection" DISABLE
- netsh interface set interface "Local Area Connection" ENABLE
- 87.115.152.166 (talk) 03:39, 6 November 2010 (UTC)
- I can get it to set the static IPs correctly, but I can't disable Windows Firewall and the wireless network. When I try swapping in 'Wireless Network' for 'Local Area Connection' in the above lines, it gives me this error:
One or more essential parameters not specified.
The syntax supplied for this command is not valid. Check help for the correct syntax.

Usage: set interface [name = ] IfName
       [ [admin = ] ENABLED|DISABLED
         [connect = ] CONNECTED|DISCONNECTED
         [newname = ] NewName ]

Sets interface parameters.
  IfName  - the name of the interface
  admin   - whether the interface should be enabled (non-LAN only).
  connect - whether to connect the interface (non-LAN only).
  newname - new name for the interface (LAN only).
Notes:
- At least one option other than the name must be specified.
- If connect = CONNECTED is specified, then the interface is automatically enabled even if the admin = DISABLED option is specified.
—Preceding unsigned comment added by KyuubiSeal (talk • contribs) 00:07, 7 November 2010 (UTC)
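- For what it's worth, spelling out the parameter names (and using the exact interface name shown in 'Network Connections') may get around that error. A sketch of the two batch files, using XP-era netsh syntax (interface names, address and mask are examples; substitute your own):
rem lan-mode.bat: switch to LAN-party settings
netsh firewall set opmode disable
netsh interface set interface name="Wireless Network Connection" admin=DISABLED
netsh interface ip set address name="Local Area Connection" source=static addr=192.168.0.10 mask=255.255.255.0

rem normal-mode.bat: undo the above
netsh interface ip set address name="Local Area Connection" source=dhcp
netsh interface set interface name="Wireless Network Connection" admin=ENABLED
netsh firewall set opmode enable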
Is there a way to get a "wikitable sortable" table to correctly sort by "year"?
I am specifically working with the table at The 100 Best Books of All Time, the entries of which have been altered and provided below as an example:
Title | Author | Year | Country |
---|---|---|---|
Dts Example | No one | 2000 BC | Nowhere |
Things Fall Apart | Chinua Achebe | 1958 | Nigeria |
Epic of Gilgamesh | Anonymous | 18th or 17th century BC | Mesopotamia |
Book of Job | Anonymous | ? | Israel |
Mahabharata | Anonymous | 4th century BC – 4th century AD | India |
Dtsh Example | No one | Late 2nd century BC | Nowhere |
Dts Example | No one | 3000 BC | Nowhere |
One Thousand and One Nights | Anonymous | 9th century | Arabia, Persia, India |
The Decameron | Giovanni Boccaccio | 1349–1353 | Italy |
Don Quixote | Miguel de Cervantes | 1605–1615 | Spain |
Ramayana | Valmiki | 3rd century BC – 3rd century AD | India |
Aeneid | Virgil | 29 – 19 BC | Italy |
Leaves of Grass | Walt Whitman | 1855 | USA |
See what happens when you sort by the "Year" column.
- I started with template:Sort (e.g. "{{sort|850|9th century}}", which should sort by "850" and display "9th century", right...?).
Template:Dts seems to work well enough, but does not permit an "alternate text".
Template:Dtsh-with-text-following-it just doesn't seem to work.
Is there some way to deal with this -- ie., to get that "Year" column with entries of the kind shown above to sort correctly -- that I am just not seeing? WikiDao ☯ (talk) 01:45, 6 November 2010 (UTC)
- I would have thought {{ntsh}} would work, but for some reason it doesn't like those -ve values:
Title | Author | Year | Country |
---|---|---|---|
Dts Example | No one | -2000 | Nowhere |
Things Fall Apart | Chinua Achebe | 1958 | Nigeria |
Epic of Gilgamesh | Anonymous | 18th or 17th century BC | Mesopotamia |
Book of Job | Anonymous | ? | Israel |
Mahabharata | Anonymous | 4th century BC – 4th century AD | India |
Dtsh Example | No one | Late 2nd century BC | Nowhere |
Dts Example | No one | -3000 | Nowhere |
One Thousand and One Nights | Anonymous | 9th century | Arabia, Persia, India |
The Decameron | Giovanni Boccaccio | 1349–1353 | Italy |
Don Quixote | Miguel de Cervantes | 1605–1615 | Spain |
Ramayana | Valmiki | 3rd century BC – 3rd century AD | India |
Aeneid | Virgil | 29 – 19 BC | Italy |
Leaves of Grass | Walt Whitman | 1855 | USA |
- 87.115.152.166 (talk) 03:30, 6 November 2010 (UTC)
- Yes I had tried {{ntsh}} too, forgot to mention that. WikiDao ☯ (talk) 12:36, 6 November 2010 (UTC)
- Ah, which {{Nts}} explains. You could hack it by having a search index that you manually order (e.g. {{subst:User:Jimp/sandbox|3}}-417BC). 87.115.152.166 (talk) 03:35, 6 November 2010 (UTC)
- Yes I tried that as mentioned in question. WikiDao ☯ (talk) 12:36, 6 November 2010 (UTC)
- Have you read Help:Sorting? ---— Gadget850 (Ed) talk 03:39, 6 November 2010 (UTC)
- Yes, the problem is that the information there doesn't seem to help with my specific set of difficulties. {{nts}} says "Negative numbers do not sort correctly with this template" but doesn't really say why, or how to get them to do so instead.
- I have also tried using just html, for example: "<span style="display:none" class="sortkey">-1750</span> 18th or 17th century BC". Haven't got that sort of thing to work yet either; still working on that.
- The primary problem seems to be with sorting "negative" or "BCE" dates along with "positive" or "CE" dates in the same column, because none of what I would have thought to be the applicable templates seem to treat negative numbers numerically. WikiDao ☯ (talk) 12:36, 6 November 2010 (UTC)
Windows 7 file permissions
I have recently installed Win 7 Professional 64-bit. Upon trying to copy over my personal Apache setup, I was stumped by my inability to save a configuration file change. At first I thought the file(s) might be locked, but this issue is system-wide: I simply can't modify a file outside of my User directory. I've searched the web and can see that many users have had similar issues, but can't find anything that fully addresses this.
My account is an administrator and if I review the "effective permissions" for a particular file, it says that I have full control of the file. Yet I can't save a modification to it. The only broad solution to date is to completely turn off User Account Control. Here are two additional clues: explicitly making my account name the "owner" by itself does not change anything, but if I then add my account to the main Permissions area, then I can edit the file. Granting "full control" to the Users group also fixes the problem, but this is not an appropriate solution.
This situation is, to put it mildly, absurd. Is there actually a robust solution to this, one that does not require messing with permissions to accomplish simple tasks? I thought Win7 was supposed to be a cure for Vista's ills, yet I encounter an even stranger problem right from the start, setting me back hours. I've about had it with MSFT.
Thanks, Riggr Mortis (talk) 06:40, 6 November 2010 (UTC)
- I don't understand. Preventing modification of files outside the user directory is kind of the point of UAC, and you seem to want to keep UAC turned on, but you say the behavior you observe is absurd. What behavior do you want? You can right-click a program and select "run as administrator" to run it without UAC restrictions.
- The "improvement" in Windows 7 was that Microsoft exempted its bundled software, such as Explorer, from UAC at the default notification level. That makes UAC nearly useless, since some of the exempt programs can be coerced into executing arbitrary code, but people think it's better because they see fewer annoying prompts. -- BenRG (talk) 08:26, 6 November 2010 (UTC)
- Actually I'm more mystified than that. AFAIK, programs running without UAC can modify most files outside the user directory. The root, Program Files directory, Windows directory and other system directories are protected against modification for obvious reasons. If you are trying to save a config file to a program's directory (usually considered a bad idea in most OSes nowadays, even if it was common in older versions of Windows, except for portable apps, which shouldn't be in the Program Files directory anyway), you can do so by putting that program outside the Program Files directory. Nil Einne (talk) 10:33, 6 November 2010 (UTC)
Regarding config files, of course... but Apache defaults to \Program Files\ for its entire install, and since I do one thing with Apache I'm not getting fancy.
What behavior do I want? Well, my account is marked Administrator, and I would expect that fact to be sufficient to edit a bloody text file anywhere on the drive, all other things being equal. Are you suggesting it's not odd that the system tells me I have effective "full control" permissions over, say, "C:\Program Files (x86)\Apache\README.txt", yet I don't? Riggr Mortis (talk) 21:09, 6 November 2010 (UTC)
BenRG: it's not that I want to keep UAC on. I will probably end up leaving it off entirely. I did on Vista, but at the same time I don't remember UAC being that intrusive on Vista. If someone simply said "UAC and administrative permissions logically conflict on Win 7", well that would answer my question, I suppose. Riggr Mortis (talk) 21:20, 6 November 2010 (UTC)
- The idea of UAC is that your user account has administrative privileges but most processes are started with only some of those privileges, so yes, this is expected behavior. It's somewhat more like the Java applet security model than the traditional Unix model. I don't know the details well enough to respond to Nil Einne above. -- BenRG (talk) 23:23, 6 November 2010 (UTC)
- AFAIK Vista has the same behaviour. See [3], which mentions similar issues. However, virtualisation is available on some versions as a stopgap measure; see [4]. User Account Control mentions this as well. Of course, another option, if Apache really wants to put the config files in the Program Files directory, would be for it to be better designed to utilise UAC and raise a prompt when it wants to modify the config. As BenRG has said, Windows's UAC is sort of a way of limiting privileges granted to apps even if you are technically running as administrator. This is partially because very few people were actually using limited accounts, as has been recommended for a long time, and partially because many Windows programs are shoddily designed and expect full administrator privileges; UAC was fairly effective in convincing developers to design their programs better, I would say. If you genuinely want full administrative privileges all the time and for all programs, then turn it off. Of course, this isn't recommended in any common OS I know of (and in fact some, like Ubuntu, actively try to stop people running as root/superuser all the time). Nil Einne (talk) 01:41, 7 November 2010 (UTC)
urls
I have 700 html files, and I need to extract all urls from them which contain "/example123/". I'm on Windows 7. How could this be done, preferably with free software? 82.44.55.25 (talk) 10:30, 6 November 2010 (UTC)
- You can get grep for free for Windows and then grep for "http[^'\"]*/example123/[^'\"]*". The exact syntax of the grep may depend on the implementation of grep in Windows. -- kainaw™ 14:31, 6 November 2010 (UTC)
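- For example, with GNU grep (exact quoting depends on your shell; filenames are illustrative):
grep -ohE "http[^'\"]*/example123/[^'\"]*" *.html > urls.txt
Here -o prints only the matched URL rather than the whole line, -h suppresses the file name, and -E enables extended regular expressions.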
- I found a windows gui version for it, which has worked quite well. Thanks 82.44.55.25 (talk) 14:59, 6 November 2010 (UTC)
- Grep is good. I also wrote a short VBScript for this back in '07. Maybe it will be of use to somebody. Smallman12q (talk) 15:25, 6 November 2010 (UTC)
FindURLsinHTML.vbs
'FindURLsinHTML.vbs
'Public Domain November 2010
'Version 1.1
'Written by Smallman12q in Visual Basic Script (VBS)
'Desc: This VBS script will find the URLs in HTML files and save them to a text file at the path set below
'Usage: Simply drag-and-drop the html files onto the script.
'1.0 (April 2007)
'1.1 (Nov 2010) Change-Added support for multiple drag and drop
Dim urlarray(2000)'Change number to whatever will be the max number of urls you expect to find
Dim urlcounter
Set objRegEx = CreateObject("VBScript.RegExp")
objRegEx.Pattern = "https?://([-\w\.]+)+(:\d+)?(/([\w/_\.]*(\?\S+)?)?)?"'Change the regex pattern here (this one does all websites)
objRegEx.IgnoreCase = True 'Change to false to not ignore case
objRegEx.Global = True
Dim targetdirectory
targetdirectory = "C:\SomeExistingFolder\SomeFile.txt" 'Place the file path where to record
Sub findit(item)
'Make sure its an html/htm file
If (Right(item, 5) = ".html") Or (Right(item, 4) = ".htm") Then
''''
Const ForReading = 1
'Read file
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(item, ForReading)
Do Until objFile.AtEndOfStream
strSearchString = objFile.ReadLine
Set colMatches = objRegEx.Execute(strSearchString)
If colMatches.Count > 0 Then
For Each strMatch in colMatches
urlarray(urlcounter) = strMatch.value
urlcounter = urlcounter + 1
Next
End If
Loop
objFile.Close
End If
End Sub
'Check to make sure drag-and-drop
If( WScript.Arguments.Count < 1) Then
MsgBox "You must drag and drop the file onto this."
WScript.Quit 1 'There was an error
Else 'There are some arguments
Set objArgs = WScript.Arguments
For I = 0 To objArgs.Count - 1 'Check all the arguments
findit(objArgs(I))
Next
'Write urlarray to file
Const ForAppending = 8
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.OpenTextFile(targetdirectory, ForAppending, True)
For count = 0 To urlcounter - 1 'urlcounter already points at the next free slot, so stop one short
objTextFile.WriteLine(urlarray(count))
Next
objTextFile.Close
MsgBox("Found " & urlcounter & " urls.")
End If
WScript.Quit 0 'Exit okay
'Resources
'http://blogs.technet.com/b/heyscriptingguy/archive/2007/03/29/how-can-i-search-a-text-file-for-strings-meeting-a-specified-pattern.aspx
'http://www.tek-tips.com/viewthread.cfm?qid=1275069&page=1
'http://www.activexperts.com/activmonitor/windowsmanagement/adminscripts/other/textfiles/
'http://snipplr.com/view/2371/regex-regular-expression-to-match-a-url/
'https://secure.wikimedia.org/wikipedia/en/wiki/User:Smallman12q/Scripts/Transperify
My graphics card; not good enough?
I have an ATI Mobility Radeon HD 4530 graphics card/processor.
I suspect it might not be enough for a program I will soon run on my computer.
The program's system requirements: Minimum: ATI Radeon 7200. Recommended: ATI Radeon X1600.
Is my graphics card outdated? Is it absolutely necessary that I upgrade or get a new graphics card altogether if I am to be able to run the program, or is my current graphics card good enough to match the minimum requirements? I know there's more to it than just the graphics card, but it seems to me this is probably where my computer's weakness lies. It's a laptop, so maybe it's not so surprising if the graphics card isn't among the best.
Is it possible to upgrade or get new graphics cards like the ATI Radeon 7200 or ATI Radeon X1600 on the internet, or do I have to go to a software store?
Krikkert7 (talk) 12:02, 6 November 2010 (UTC)
- It is not outdated, at least for these requirements. The 7200 is very old (from around 2001 or 2002) and the X1600 is newer, but still older than the 4530 (the X1600 is from 2006). The 4530 might be a fairly low-end device, but it should still be comparable to the X1600. -Yyy (talk) 12:28, 6 November 2010 (UTC)
hm.. that's good news I would say, but quite different than what I read earlier. But that's the good thing I guess about asking around, getting several opinions. Krikkert7 (talk) 13:00, 6 November 2010 (UTC)
is it possible to distribute a large database as many ways as you want?
or, in general, no? 84.153.222.232 (talk) 13:25, 6 November 2010 (UTC)
- You need to define "distribute". If you mean "send the data to another location", the answer is "yes". You can send it over the web. You can remote connect through an ODBC interface. You can SFTP CSV files. You can put it on a CD, duct-tape it to a monkey, and send him overseas on a freighter. If you intend "distribute" to mean "show the data to users", you are referring to what most people call a "view". You can set up as many views as you like. -- kainaw™ 14:29, 6 November 2010 (UTC)
- If you mean, can you have the database exist in different chunks on different machines, you could, but it would present certain practical difficulties. --Mr.98 (talk) 14:46, 6 November 2010 (UTC)
- I think the OP means federating the database - distributing its data across many servers. This is possible, but is a feature that is significantly easier on commercial database platforms (like IBM DB2), as compared to, say, MySQL. Here's a technical-series from IBM called Data Federation and Data Interoperability. Here, for comparison, is the same concept in MySQL. Oracle calls this "clustering" although it has fewer features and capabilities; they have a white paper comparing federation to clustered distribution: Database Architecture: Federated vs. Clustered. Nimur (talk) 15:02, 6 November 2010 (UTC)
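- As a small illustration of the MySQL flavour, a federated table is declared roughly like this (host, credentials and columns are placeholders; the column list must mirror the table on the remote server):
CREATE TABLE remote_orders (
    id INT NOT NULL,
    total DECIMAL(10,2),
    PRIMARY KEY (id)
) ENGINE=FEDERATED
CONNECTION='mysql://user:pass@db2.example.com:3306/sales/orders';
Queries against remote_orders on the local server are then transparently forwarded to the sales.orders table on db2.example.com.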
Whole Disk Encryption
Does whole disk encryption cause extra stress on a hard drive, such as for re-reading the keys and so forth? I'm specifically thinking of an SSD drive, and whether whole disk encryption will shorten its useful life due to lots of extra read/writes. —Preceding unsigned comment added by 75.75.29.90 (talk) 14:49, 6 November 2010 (UTC)
- No, it doesn't cause any significant extra stress for the hard drive. The keys (or key files) are relatively small, usually less than 1MB, so there's no extra stress there (the keys are read once on boot and stored in RAM). The only increase in read/write will come from the actual extra space the encrypted form takes...though this usually isn't significant (depends on algorithm). The actual encryption may stress the CPU a bit though...unless the encryption is performed by a dedicated chip on the hard drive. But overall, the answer is no...disk encryption doesn't stress/reduce the life of the HD. Smallman12q (talk) 17:07, 6 November 2010 (UTC)
- When using popular full-disk encryption products like TrueCrypt and BitLocker, the encrypted data doesn't take up any more space. There are no extra disk accesses whatsoever except at boot time. Well, technically, since the encryption code takes up a small amount of RAM, there might be a little bit more virtual memory paging, but I doubt you'd notice the difference. -- BenRG (talk) 23:13, 6 November 2010 (UTC)
smallest, most compact and self-contained C++ compiler for Windows
so, my favorite editor is notepad2, which is TINY, and totally self-contained. Really just an .exe and maybe an .ini to go with it. I realize that, because of the standard libraries at least, I don't have much hope of finding a similarly small Windows C++ compiler. Still, I dare hope.
what is the smallest, most compact and self-contained, C++ programming environment for Windows? We're talking something that you could run on a netbook and is not a messy, huge environment, but just a compact set of files. thank you! 84.153.207.135 (talk) 17:56, 6 November 2010 (UTC)
- Dev-C++ is pretty self-contained. It is a complete IDE, including a text editor and build system; it uses MinGW (gcc) as its compiler. Alternately, you can download gcc packaged for Windows as MinGW ("Minimalist GNU for Windows") if you only want a command-line compiler. Microsoft Visual Studio Express is pretty lightweight as well. Nimur (talk) 18:03, 6 November 2010 (UTC)
- Microsoft's C++ compiler runs from the command line and is configured with environment variables, very much like gcc (though all of the command line switches have different names). I haven't compared their sizes, but I suppose they're similar if you only keep the headers and libraries that you plan to use. Microsoft's compiler is quite a bit faster than gcc, which might matter if you want to do a lot of big builds on battery power. It also optimizes somewhat better for x86. Other than that, you could go either way. -- BenRG (talk) 23:38, 6 November 2010 (UTC)
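- To give a sense of how alike the two feel from a console, hypothetical invocations of each (file names are examples):
cl /nologo /EHsc /O2 hello.cpp
g++ -O2 -o hello.exe hello.cpp
Both locate headers and libraries through environment variables (INCLUDE and LIB for cl, the usual search paths for gcc).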
how does this work?
how does the 1988 "westley" entry at http://www0.us.ioccc.org/years.html#1988 work? I mean, could someone write a 300-word paragraph describing in excruciating detail how the pseudocode would look if it were written in nearly formal English? I am just having too much trouble following it... What the program does: SPOILER: it's supposed to calculate pi by looking at its own area. strikeout for spoiler. /SPOILER 84.153.207.135 (talk) 18:22, 6 November 2010 (UTC)
- Did you read the hint? It's important that you either make the adjustment in the hint, or use a really old compiler that does dumb preprocessing. I'll assume you're interested in the original version, with the dumb preprocessing.
- The first line of F_OO is 4 instances of the "_" macro separated by 3 minuses. Here's how it looks after preprocessing:
-F<00||--F-OO--;--F<00||--F-OO--;--F<00||--F-OO--;--F<00||--F-OO--;
- It has become a sequence of 4 statements. The first one is different from the others, because it didn't have a minus before it. Each statement is a pair of clauses separated by "||", the short-circuit operator which evaluates the expression on the right only if the expression on the left was false. The result of the "||" operator is not used. So the statements are equivalent to
if(!(-F<00)) --F-OO--; if(!(--F<00)) --F-OO--;
- The double 0 isn't important, it's just a zero that helps put more "foo"-looking things in the code. -F<0 is true if F>0, which will never happen in this program because F starts at 0 and decreases from there. So in the first if statement, the condition is true and the --F-OO-- gets executed. --F-OO-- is equivalent to (--F)-(OO--) meaning F is decremented, and OO is decremented, and the old value of OO is subtracted from the new value of F, but that last part doesn't matter since the result of the subtraction is not used. From their initial values of 0, F and OO have both become -1.
- In the second statement, --F<00 causes F to first be decremented again, then its new value (-2) is compared to 0. -2<0 is true so the --F-OO-- is not evaluated. The third and fourth statements act likewise, bringing F down to -4 while OO stays at -1.
- The following lines are similar. For each line, the number of times F is decremented is the number of "_" expansions in the line, while OO is decremented only once per line. At the end, OO is the count of lines, corresponding to the diameter of the circle, and F is the total count of "_" expansions, corresponding to the area of the circle. They're both negative, but that's fixed in the printf statement. 4*area/diameter/diameter is printed. 67.162.90.113 (talk) 21:48, 6 November 2010 (UTC)
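- Put differently, once the macros have done their counting, the program boils down to something like this C sketch (a paraphrase with hypothetical final counts, not the original code):
#include <stdio.h>
int main(void) {
    /* Hypothetical final values: 201 "_" cells (area) over 16 lines
       (diameter), both accumulated as negative counts, as described above. */
    int F = -201, OO = -16;
    /* -F flips the sign; pi is approximately 4 * area / diameter^2. */
    printf("%1.3f\n", 4. * -F / OO / OO);   /* prints 3.141 */
    return 0;
}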
- There's an article at International_Obfuscated_C_Code_Contest#Examples...though it's scant on details. Smallman12q (talk) 01:30, 7 November 2010 (UTC)
bandwidth
Approximately how much bandwidth is used when a program, such as a web browser or crawler, asks the remote server how new a file is? 82.44.55.25 (talk) 21:22, 6 November 2010 (UTC)
- OK so here's my browser asking Wikimedia about the file http://bits.wikimedia.org/skins-1.5/vector/images/arrow-down-icon.png?1
- My browser sends:
GET /skins-1.5/vector/images/arrow-down-icon.png?1 HTTP/1.1
Host: bits.wikimedia.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US;rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://bits.wikimedia.org/skins-1.5/vector/main-ltr.css?283-5
If-Modified-Since: Wed, 05 May 2010 19:06:38 GMT
If-None-Match: "bc-485dd86856b80"
Cache-Control: max-age=0
- And the server replies the file was modified in May:
HTTP/1.1 304 Not Modified
Date: Sun, 07 Nov 2010 08:35:46 GMT
Via: 1.1 varnish
X-Varnish: 1179536182
Last-Modified: Wed, 05 May 2010 19:06:38 GMT
Cache-Control: max-age=2592000
Etag: "bc-485dd86856b80"
Expires: Tue, 23 Nov 2010 14:42:06 GMT
Connection: keep-alive
- That's 830 bytes, but most of the stuff the browser sends is not necessary, so you can make it less than half a K. F (talk) 08:45, 7 November 2010 (UTC)
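- For instance, a trimmed-down conditional request built only from headers in the exchange above comes to under 200 bytes (before TCP/IP overhead):
GET /skins-1.5/vector/images/arrow-down-icon.png?1 HTTP/1.1
Host: bits.wikimedia.org
If-Modified-Since: Wed, 05 May 2010 19:06:38 GMT
If-None-Match: "bc-485dd86856b80"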
- It's also gzip encoded (look at the Accept-Encoding line), so it's smaller. However there is the TCP/IP overhead, which could easily make up for what the compression gains. Shadowjams (talk) 04:19, 8 November 2010 (UTC)
November 7
.sla extension on Scribus
I'm trying to open a .sla document on the latest Scribus version, but it says it doesn't support this file format. I know .sla is a Scribus file extension, so what is the problem? Thank you a lot. Leptictidium (mt) 10:59, 7 November 2010 (UTC)
- Three possibilities spring to mind:-
- 1 - The file has a .sla extension, but was created by another program that uses the same extension rather than Scribus
- 2 - The file has had its extension changed and is actually (for example) a .doc file
- 3 - The file is corrupt.
Exxolon (talk) 11:26, 7 November 2010 (UTC)
- The file in question is the Welcome to Wikipedia brochure, downloaded from the Wikimedia Bookshelf website. Is there any problem with the file? --Leptictidium (mt) 11:52, 7 November 2010 (UTC)
- The outreach page you cite says the document needs Scribus 1.3.5, and indeed the header info in the file says "Version="1.3.5.1". 1.3.5 is newer than the standard version available for download from the Scribus site or in Linux package repositories. The outreach page also says where to get a sufficiently new version. -- Finlay McWalter ☻ Talk 16:16, 7 November 2010 (UTC)
Ping
-What is the use of Ping command in command prompt?
like, "ping wikipedia.com" ? Max Viwe | Wanna chat with me? 14:56, 7 November 2010 (UTC)
- To test network connectivity, response time, status of a sites server, etc. The Ping article explains in more detail. 82.44.55.25 (talk) 15:01, 7 November 2010 (UTC)
It's like you're playing ping-pong with Wikipedia. (This means it must be configured to play ping-pong -- not all servers are.) The numbers you see are how long (in milliseconds) the ball takes to get to Wikipedia and back (1000 would mean it takes one second to go there and back). Normally ping is not recreational; it's just to see if you have a network connection with the other side and how long it takes to get a virtual ping-pong ball (a small packet of data) there and back again. 84.153.212.109 (talk) 16:10, 7 November 2010 (UTC)
Python object creation failure idiom
I'd like to write
Thing = MyObject(...mymin, mymax...)  ## returns None if mymin ≥ mymax
if Thing:
but a moment's experimentation shows that this won't work: Thing receives an object even if MyObject.__init__ says return None. Several less elegant approaches occur to me:
- Raise a custom exception.
- Give MyObject a field self.valid, to be read only once.
- Take the validity testing (typically more complicated than this toy example) out of MyObject.
What's the customary way? —Tamfang (talk) 20:05, 7 November 2010 (UTC)
- I think it's really going to depend on context, and your example is a bit too abstract to clarify (I appreciate it's deliberately abstract). If constructing a MyObject with those parameters is bad (like asking for a new network socket with an invalid socket family) then that's surely a plain old programming error, and an exception is called for (and you do the caller a favour if it's as soon as possible - at the constructor, rather than returning an essentially useless object). If, however, an "invalid" MyObject isn't entirely useless (or that it might, by some other means, sensibly become useful) then maybe MyObject should have a state variable (I'm-useless-now, I'm-useful, I-used-to-be-useful). Some objects evolve (sockets do that), so if the bad condition is a temporary one, an exception doesn't seem appropriate. As a general rule, exceptions should be exceptional, and someone who reads the docs properly and upon whom fortune smiles should never see one (or should be able to avoid one without much effort). -- Finlay McWalter ☻ Talk 20:51, 7 November 2010 (UTC)
- A related point, about custom exceptions. What kind of exception, or whether you need to define a bunch of different exceptions, is down to you thinking about what a caller would reasonably do with the different exceptions. If your code can raise three different types of custom exceptions, that means you think there are circumstances in which a reasonable caller would want to catch only one or two of those three; if they'll always catch only all or none, then they might as well all be the same type (and you can still stuff any relevant details into the exception object if needed). I occasionally see things that amount to the-first-parameter-was-wrong-exception vs the-second-parameter-was-wrong-exception, when surely wrong-parameter-exception would be all that's useful (with a value inside it, or just a string, saying which). -- Finlay McWalter ☻ Talk 21:00, 7 November 2010 (UTC)
- The customary way is to raise a ValueError; that's meant for exactly this kind of situation, where the type of the function arguments is correct, but the values are meaningless to the function. Specify what went wrong in the exception message, e.g.
raise ValueError("mymin must be less than mymax")
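- For instance, a minimal sketch of how that looks for the OP's hypothetical MyObject:
class MyObject(object):
    def __init__(self, mymin, mymax):
        if mymin >= mymax:
            raise ValueError("mymin must be less than mymax")
        self.mymin = mymin
        self.mymax = mymax

try:
    thing = MyObject(5, 3)  # invalid: 5 >= 3
except ValueError as e:
    print(e)  # prints: mymin must be less than mymax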
- Yes, that's overwhelmingly true. Curiously, the very example I randomly chose is the one thing I can find that, in the case of an obviously bad parameter, doesn't - socket.socket(1234, socket.SOCK_STREAM) raises a socket.error. Everything else (in my epic 10-minute-long quest of typing stupid values into various Python functions) got a ValueError every time. -- Finlay McWalter ☻ Talk 22:48, 7 November 2010 (UTC)
"but a moment's experimentation shows that this won't work"
- That's not entirely true. You can easily make it work by assigning to the variable MyObject a factory function which creates your objects, so that the call MyObject() calls this function instead of the class directly. For all intents and purposes, it would work the same way as before, except that this function now has the option of returning None. --128.32.129.245 (talk) 01:02, 8 November 2010 (UTC)
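- A sketch of that approach (all names here are made up for illustration):
class _MyObject(object):
    def __init__(self, mymin, mymax):
        self.mymin = mymin
        self.mymax = mymax

def MyObject(mymin, mymax):
    # Factory: called exactly like the class, but free to return None.
    if mymin >= mymax:
        return None
    return _MyObject(mymin, mymax)

Thing = MyObject(3, 5)
if Thing:
    print("got a valid Thing")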
- The class method that creates a new instance is called __new__, not __init__. __init__ initializes an already-created instance, which it gets as an argument, and doesn't return anything. But I really don't think that you want to override __new__. You should either throw an exception from __init__ or make a separate factory function, as other people have already said. -- BenRG (talk) 10:15, 8 November 2010 (UTC)
Network Security
Hello. When I returned home, the wireless light on my modem was on, but my laptop was off and my peripheral hardware isn't connected wirelessly. Was someone trying to piggyback on my Internet? Thanks in advance. --Mayfare (talk) 22:36, 7 November 2010 (UTC)
- At least for the Netgear and Linksys devices I've seen, the wireless light being on just means the modem's wireless function is enabled. If it's flashing that would suggest traffic (but I don't know if some random device somewhere innocently interrogating the networks it sees would make for much if any flashing). The best thing for you to do is to make sure you've got the wireless security settings turned on (WPA/WPA2, not WEP, not unencrypted). -- Finlay McWalter ☻ Talk 22:52, 7 November 2010 (UTC)
November 8
EXIF, meaning of "Optical Zoom Code"
Does anyone know how to interpret the line
Optical Zoom Code : 7
in the EXIF data of a photo? The camera is a Canon PowerShot A590 IS. I'm assuming it means 4X optical zoom (the maximum) but I'd like to find a resource for interpreting this stuff more generally. Thanks, --Trovatore (talk) 01:36, 8 November 2010 (UTC)
- Considering its widespread use, EXIF is surprisingly nonstandardized. The "official EXIF specification" is not really official - it is defined by CIPA (Camera & Imaging Products Association), a small organization with loose affiliations to several Japanese camera manufacturers. Our EXIF article links to the EXIF DCF specification version 2.3 - and as you can see, "optical zoom code" is not a standard tag. It is an "internal use only" tag for the manufacturer, by the manufacturer - and any conformity to any spec should be considered "use at your own risk."
- Optical Zoom Code usually directly maps to a zoom-state; your camera might use codes (0-7) or (0-127) or so on to denote all possible steps between minimum and maximum zoom. The exact mapping of zoom-code to focal-length will vary from manufacturer to manufacturer (and model to model, even firmware to firmware). You can use a tool like gphoto2 to look up the mappings between zoom-codes and actual focal-lengths for your particular camera (if you trust their tables).
- If you really need to know the true optical zoom, the EXIF tag you should use is FocalLengthIn35mmFilm. You can compare this to your camera's FocalLength tag to determine the optical zoom as a "1x" or "2x" number. Nimur (talk) 04:09, 8 November 2010 (UTC)
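- If you have ExifTool around, something like this should print both values (the command is illustrative; note that ExifTool spells the 35mm tag FocalLengthIn35mmFormat, while the underlying EXIF tag is FocalLengthIn35mmFilm):
exiftool -FocalLength -FocalLengthIn35mmFormat photo.jpg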
- Thanks much. Can't seem to find that tag. But there's FocalLength and LongFocal and ShortFocal, and FocalLength equals LongFocal which is four times ShortFocal, so I take that to mean the picture was shot at 4x optical zoom. --Trovatore (talk) 04:58, 8 November 2010 (UTC)
Mac OS X version adoption rates
Where could I find a breakdown percentage of Mac users who are running Mac OS X 10.6, 10.5, 10.4, even Mac OS 9? --70.167.58.6 (talk) 08:38, 8 November 2010 (UTC)
- They did a poll over at MacOSXHints a few months back, if I remember correctly, which showed that 10.6 adoption rate was over 70%. 10.5 was something like 18% or 19%, 10.4 less than 5%, and a smattering of people with lower systems. Unfortunately I can't find the link. Of course, that's a bit of a geek site which might queer the results a bit. I suspect upgrade rates are lower for people who use their machines for business purposes (why risk the possibility of having to rewrite your business webpage because of software changes?). --Ludwigs2 09:41, 8 November 2010 (UTC)
- ah, found it: [5] as of last January. 68% 10.6, 21% 10.5, 8% 10.4, and the remainder of the X systems accounting for roughly 2%. 21,000 votes, though, so it's a decent sample even for a convenience sample. The poll didn't go into OS 9 users, though the previous poll on system version, which was held back in 2006, showed only 0.69% of users using OS 9 and only 0.21% using OS 8 or earlier. --Ludwigs2 09:52, 8 November 2010 (UTC)
- It may be a 'decent sample' but clearly not a random one, not even close (to state the obvious, people visiting a site like MacOSXHints are likely to be fairly technically minded). I would trust a decent scientific poll with only 2000 people more than this one with 20000 votes. Nil Einne (talk) 12:30, 8 November 2010 (UTC)
- oh, no question. but I've never seen any other type of site do such a poll, so I don't know how we'd get around that problem. I mean, I suspect that Apple has that data somewhere in its nefarious databanks (just from phone-homes on installs and updates), but if so they do not seem to be inclined to share. --Ludwigs2 22:10, 8 November 2010 (UTC)
- Here is an article from February 2010 which says that 10.5 was actually still the most common version at that point, but 10.6 adoption was rapidly increasing. The numbers look far more realistic than the ones given above, which look like early adopter rates. --Mr.98 (talk) 00:32, 9 November 2010 (UTC)
HELP. How do I remove specific auto-suggestions from my Google Chrome's URL bar?
Anytime I start typing any web address with "d," guess what is the first site Google Chrome suggests???
It's DailyDiapers.com!
I feel haunted by that reappearance; even though I only go there on Incognito mode now, I still get reminders that I used to surf it on normal mode.
A BIG problem would be if a friend borrowed my laptop and decided to type up a website that started with "d." You can imagine what could happen next!
Now how do I remove that offending auto-suggest from ever showing up again?
(I had to register a new name just to ask embarrassing questions; I wasn't even going to post this from my IP because that could trace back to me as well.) --EmbarrassedWikipedian (talk) 09:58, 8 November 2010 (UTC)
- Step by step instructions from http://www.google.com/support/forum/p/Chrome/thread?tid=6501481bab1c67eb&hl=en
Turning off Auto-Suggestions
1. Clear your browsing history
2. Click the Tools menu
3. Select Options
4. Click the Under the Hood tab and find the Privacy section
5. Deselect the 'Use a suggestion service to help complete searches and URLs typed in the address bar' checkbox.
6. Click Close.
General Rommel (talk) 10:16, 8 November 2010 (UTC)
- Extra note: Under my dev version of Chrome, it appears as 'Use a prediction service', not 'suggestion service'. General Rommel (talk) 10:18, 8 November 2010 (UTC)
Long-Term Data Archival
Hi Everyone,
I am seeking to store large amounts (~2TB) of data for a very long (minimum <strikeout>50</strikeout> 25 years) period of time. The media has to be rewritable, and the data will be very infrequently (annually or semi-annually) rewritten. The data must be preserved for at least <strikeout>50</strikeout> 25 years. Which of the following types of storage media will (or is at least most likely to) preserve data for this length of time?
- Ultrium LTO-5 750GB Rewritable Tape Cartridge
- Removable Serial ATA Hard Disk Drive
- Ultra Density Optical 30GB Rewritable Disk
- Multi-Level Cell Solid State Drive
Thanks, everyone. Rocketshiporion♫ 12:36, 8 November 2010 (UTC)
- One of the big problems you'll have either way is whether you'll be able to edit any of those media in 50 years. 50 years is a loonnngg time in terms of information technology — consider that this is what a computer looked like even less than 50 years ago. (Computers from 1960 did not even use integrated circuits!) Will SATA be around in 2060? I wouldn't bet on it. SATA itself dates from only 2000 or so, as far as I can tell. Even parallel ports only go back 40 years.
- Rather than trying to find one system that will last for 50 years, might I recommend that a rotating system be used? Pick a system that will last for 10 years. After 10 years, upgrade it to whatever the equivalent is at that time. After another 10 years, repeat. You've both reduced your necessary lifespan for the system significantly, and also basically guaranteed that it'll be compatible with whatever fancy new computers there are in 2060. 10 years is an acceptable tech jump — people have all sorts of common methods of playing 10-year-old software, or using 10-year-old peripherals — whereas 50 is a bit much. And if you forget to upgrade it after a point... well, at least you're 10 years more up to date than you would have been otherwise.
- A brief analogy. The Bible did not survive to modern times because people made one very good copy of it and kept it very safe. (There are a couple verrry old copies, but the fact that they have survived is basically attributable to luck.) It survived because people were constantly re-copying it, on modern paper, with modern technology.
- All of the above is making the assumption that you'd have said drives in a place accessible to upgrades. If you're putting it on a rocket ship, well, I suppose that would introduce additional variables. --Mr.98 (talk) 14:00, 8 November 2010 (UTC)
- As the first Parallel ATA HDDs appeared in 1986, and there are still motherboards being sold with PATA connectors, I am quite confident that SATA (which AFAIK appeared in 2004) will be around at least as long - until 2028. In addition, most of the SSDs of which I'm aware use SATA, and those that don't use FC. Magneto-Optical disks (which have been superseded by UDO disks) were introduced in 1985, and they are still in use today, 25 years later. I agree, though, that 50 years is a very looooong time for data storage media, and so I shall amend my question - see above. Rocketshiporion♫ 19:38, 8 November 2010 (UTC)
- I would just also note that the tech difference between 2060 and 2010 is probably a lot greater than 1985 and 2010, thanks to Moore's Law and its analogues. Whether such a thing would impact peripherals and media, one can only speculate. But I would not be surprised if it did (especially since the bandwidth rates on many modern ports leave quite a lot to be desired for, even with present requirements, much less future ones). --Mr.98 (talk) 23:27, 8 November 2010 (UTC)
- I'm pretty sure your AFAIK is wrong. I bought a SATA motherboard in 2003. I may have also purchased a SATA HDD then although I'm less sure of that. Our article claims a creation date of 2003 but confusingly describes how many sold between 2001 and 2008. I think 2001 is more likely to be accurate as I find people discussing SATA stuff as if it existed on usenet in 2002 [6]. It's possible it was not finalised until 2003 so the 2002 and 2001? stuff were based on the draft standard. Nil Einne (talk) 00:28, 9 November 2010 (UTC)
- I would point you at RAID 6 which is a way of using 4 or more hard drives in a collection such that data integrity can fully survive the simultaneous failure of 2 drives. Rather than assuming that any method will work flawlessly for 25 years, it would generally be better to develop fault-tolerant systems. Whenever you update the data, run data integrity checks and replace any drives that generate errors. As long as the same data block never fails on three drives simultaneously, the data will be safe. For added safety you can remove the drives to different remote storage locations or create additional off-site copies to guard against localized events (such as a fire) destroying multiple drives at once. Dragons flight (talk) 01:32, 9 November 2010 (UTC)
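- For concreteness, on Linux such an array can be set up with mdadm, and integrity checks triggered through the md layer (device names are examples; this is a sketch, not a full archival procedure):
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
echo check > /sys/block/md0/md/sync_action
The first command builds a 4-drive RAID 6 array; the second asks the kernel to read and verify every block, which is the kind of scrub you would run at each annual rewrite.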
What does this line of assembly mean?
Hi! I'm working on some assembly code for class and I haven't been able to understand this line:
movzbl (%eax), %eax
For some context, %eax, before calling the operation above, held a char *. After this instruction, the value of %eax became an integer. From the little I picked up, this has something to do with turning off the first 24 bits (?) of %eax, and doing something to the rest (?). Could someone please help me out? Thanks! —Preceding unsigned comment added by Legolas52 (talk • contribs) 14:58, 8 November 2010 (UTC)
- movzbl means retrieve (move) a byte (a char), add 24 zero bits to it (making it a long, which in x86 assembly means 32 bits, whatever long may be in C), and store it. The parentheses mean "the memory at address" (so they correspond to the * operator in C). So it's equivalent to
eax = *(unsigned char *)eax;
in C, bearing in mind that assembly has no strong type system so things like casts are entirely implied by how you use the values in question. You thought it meant "turning off the first 24 bits", but it actually means "adding 24 'off' bits". --Tardis (talk) 15:25, 8 November 2010 (UTC)
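- To make the zero- versus sign-extension distinction concrete (illustrative values; movsbl is shown only for contrast):
movzbl (%eax), %eax   # byte 0x80 at the address -> %eax = 0x00000080 (zero-extended)
movsbl (%eax), %eax   # byte 0x80 at the address -> %eax = 0xffffff80 (sign-extended)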
Eye Altitude in Google Earth
What does the eye altitude shown on the Google earth screen mean? —Preceding unsigned comment added by 113.199.204.115 (talk) 15:15, 8 November 2010 (UTC)
Mongolian script support
Hi, I know this is getting a recurring question here, but I checked Multilingual support as well as Multilingual support (East Asian) and did some Google searching, and yet didn't manage to find from where I can download support for the Mongolian script so my browser can get able to view this properly: ᠤᠯᠠᠭᠠᠨᠪᠠᠭᠠᠲᠤᠷ. Could you help me? --Theurgist (talk) 15:40, 8 November 2010 (UTC)
Subliminal
I spend all day in front of a computer. I want to flash subliminal messages to myself on the screen at random intervals, for example "Get a girlfriend" or "Improve your life". What programs could do this, easily and at low or zero cost? Thanks. —Preceding unsigned comment added by 178.66.8.154 (talk) 19:40, 8 November 2010 (UTC)
- "J", would be the first one. "Y" would be the last. In between, in no particular order, "I", "O", "N", "H", "T", "E", "N", "V", "A".... 85.181.151.31 (talk) 20:31, 8 November 2010 (UTC)
- There are a number of programs that will do this. Here's one.
- Be aware that this type of subliminal message has never been shown to work. The original work by James M. Vicary was almost certainly a hoax designed to promote his own company that would have specialized in subliminal advertising. [7]APL (talk) 21:54, 8 November 2010 (UTC)
- You'd do better just to have your machine speak the words or pop up a dialog - there's no evidence that subliminal messages work at all with sentences, and they only produce mild generic effects with simple stimuli. If you think it will work and you want to try it, a flashed image of a Playboy bunny or a wad of cash will work better than the phrases 'get a girlfriend' or 'improve your life'. Keep in mind, though, that different people have different subliminal ranges, so something that doesn't register consciously on your brain might be perfectly visible to others, which might cause you some consternation if other people are around.
- Really, though, if you think you need outside help to get yourself motivated, go spend an evening in the self-help section of your local chain bookstore. there are a number of motivational-type books that are actually reasonably effective at helping you get in gear. they'll be a lot better than toying with subliminal suggestion, anyway. --Ludwigs2 22:05, 8 November 2010 (UTC)
saving a level of game
Is there a general method for saving the levels of games? How can I save the level in old games like Dave 1 and Dave 2? —Preceding unsigned comment added by 119.154.120.194 (talk) 21:20, 8 November 2010 (UTC)
- Do you mean from the point of view of a game programmer, or from the point of view of game player? Every programmer writes their save-game code a little differently, so no. From the point of view of a player, the answer is normally no; but if you happen to be running the game within an emulator, all emulators that I have seen allow you to pause the virtual machine, and you can usually then save a copy of the virtual machine. For example, you could be using Virtual PC or VirtualBox to play your old copy of Doom 2, and then pause the game when you meet the big boss, save the state of the virtual machine to a file (meaning all of the contents of RAM, the CPU registers, etc are saved), and then load that virtual machine as many times as you want in a row until you win. Poor Doom 2 - there's no way for it to even know you've played it 666 times like this. Comet Tuttle (talk) 21:48, 8 November 2010 (UTC)
November 9
Using two different bibliographystyles in one document
I am writing a book-sized document in TeX, for which I'd like to include a bibliography for each chapter (numbered, and in order of appearance), and one at the end for the book as a whole. The one at the end should be ordered alphabetically by the first author's last name. I can get either of these individually, by using either "plain" or "unsrt" as the bibliographystyle and with the help of the "chapterbib" package. But I cannot get the final bibliography to use a different style than the chapter bibliographies. Any suggestions? mislih 01:07, 9 November 2010 (UTC)
Format specifier: %d %n
Hi,
Can anyone give me an example of when / how you would use the format specifiers "%d %n"?
Thanks