Wikipedia:Reference desk/Computing

Revision as of 11:59, 21 November 2015


November 16

What makes cosine similarity useful for classifying documents?

Why use the cosine similarity to measure how similar two documents are? That is, (A * B) / (||A|| * ||B||), where A and B are vectors of word frequencies.

Couldn't you just make a table of words and frequencies for each document and subtract the values of doc A from the values of doc B? More differences there would imply more overall difference. That is, A - B, where A and B are vectors of word frequencies. --3dcaddy (talk) 19:01, 16 November 2015 (UTC)[reply]

You need some historical background for a complete answer... The Jaccard index (also known as the Jaccard coefficient) was popular for measuring similarity and diversity between two different sets. Flowers are often used as an example of the Jaccard index. My set might contain color and length of stem. Yours might have color and number of petals. We compare based on what we both have and get a measure of similarity. It is also important to note that the Jaccard index handles sparsity very well. If I forgot to write down color for half the flowers in my set, it still works for those that actually have data. Next... what if we are working strictly with binary data? Every field is a yes/no answer. It is still two sets containing different columns and there is still a lot of sparsity. The Tanimoto coefficient is an algebraic form of the set-theoretic Jaccard index for binary sets. It is popular and common. But it has some overhead that can be simplified. If you know the algebraic form of cosine, you will see that it is very similar to the Tanimoto formula. So, why not throw out the complexity and use cosine instead? You nearly get the Tanimoto coefficient, which is the same as the Jaccard index for binary sets. The Jaccard index is already accepted, so using cosine is nearly accepted. That is the history. Why do we not just count the differences? If I say the differences count up to 132, what does that mean? Nothing really. You need to confine the answer to a range so I know the minimum and maximum values. We know that cosine is -1 to 1. If you tell me the difference is 1, that is the max value. I only have to ask whether the 1 means absolutely different or absolutely the same. By convention, 1 means absolutely the same and 0 means completely different (-1 means the exact opposite, but that makes no sense in most examples). If that doesn't answer the question, please ask about what I missed. I don't want to flood you with even more details that you don't think are pertinent. 
162.211.46.242 (talk) 20:31, 16 November 2015 (UTC)[reply]
The short version: Each document is modeled as a vector in a high-dimensional vector space. When you have two vectors that point in nearly the same direction, they are more "similar" than vectors that are orthogonal or antiparallel, or just point in rather different directions. See also document modelling. (or maybe not, that's a rather pitiful stub) SemanticMantis (talk) 21:33, 16 November 2015 (UTC)[reply]
Thanks for the answers, now I get it. I didn't realize that counting the number of differences, instead of producing a value in a fixed range, was quite off track. Is there any literature about such measures, which intuitively make sense but are mathematically no good? --3dcaddy (talk) 21:42, 16 November 2015 (UTC)[reply]
SemanticMantis is correct. Cosine similarity is strictly the cosine of the angle between the two vectors. The relative vector lengths are ignored. If you analyze the Tanimoto formula, you will recognize that it takes into account the distance between the end-points of the vectors. So, if the angle is 10 degrees, cosine is always the cosine of 10 degrees. With Tanimoto, if the vector lengths are nearly the same, the distance between the end points will be near minimal and give me a higher similarity value. If one vector is much longer than the other, the angle remains the same, but the distance between the end points is much longer and the similarity is reduced. If the cosine similarity is good enough for you, then use it. If it turns out that some vectors are extremely long and others are extremely short, you will want to use Tanimoto difference to account for that aspect of measuring similarity. Then, if you are still refining your algorithms, you can look into SVD (which I personally don't like) or convert your data into ordered strings and make a big jump into string-based similarity. 209.149.115.177 (talk) 14:50, 17 November 2015 (UTC)[reply]
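The measures discussed in this thread can be sketched in a few lines of Python (an illustration only, not code from the thread; the function names are mine):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two word-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def tanimoto(a, b):
    """Tanimoto coefficient: dot / (|a|^2 + |b|^2 - dot). Like cosine,
    but penalizes vectors of very different lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

def jaccard(a, b):
    """Jaccard index of two sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

doc_a = [3, 1, 0, 2]   # word counts over a shared vocabulary
doc_b = [6, 2, 0, 4]   # same proportions, twice the length
print(cosine_similarity(doc_a, doc_b))  # ~1.0 (the angle is zero)
print(tanimoto(doc_a, doc_b))           # lower, since the lengths differ
```

Note how cosine is unchanged when one vector is scaled, while Tanimoto drops: exactly the length-insensitivity difference described above.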

Super Mario Maker

After watching YouTube videos of Super Mario Maker, I've become interested in actually buying a Wii U just to play it. It would be the first video game made in the last two decades that I would actually consider buying.

However, I'm afraid I've fallen hopelessly out of touch of modern video games. The last time I bought new video games, they were on real, physical, honest-to-gosh floppy disks put inside a nice, beautiful cardboard box I could take home, open up, and put into my computer.

I figure that these days, actual physical storage media is like Soooo last millennium! What are you, Methusalem?. So, how would I actually go about buying and installing the game? JIP | Talk 19:25, 16 November 2015 (UTC)[reply]

You can either buy a physical Wii U Optical Disc retail, or you can download from the Nintendo eShop. -- Finlay McWalterTalk 19:34, 16 November 2015 (UTC)[reply]
In case a physical disc isn't available, how would I go about downloading it? Can I do it solely using the Wii U? I already have a wired (i.e. non-wireless) broadband Internet connection, which I'm using right now to write this message. Can I use that on the Wii U? Does it have an Ethernet cable connection? How do I pay for it? Can I just use my credit card or do I have to set up some sort of new-fangled subscription account? JIP | Talk 19:40, 16 November 2015 (UTC)[reply]
The Wii U doesn't have an ethernet port, and most people use its Wifi capability; if you can't do that, you can buy a USB Wii LAN adapter which plugs into the Wii U's ethernet port. You can use a credit card with the eShop (and maybe a debit card); you can also buy physical gift cards (they're just plastic cards with numbers on them) in supermarkets which give eShop credit. -- Finlay McWalterTalk 20:05, 16 November 2015 (UTC)[reply]
You said both "the Wii U doesn't have an ethernet port and "plugs into the Wii U's ethernet port". Which is it? Or did you mean "the Wii U's USB port"? In any case, I might be better off finally buying and installing a WiFi device in my apartment. So far I've had no use for one as the only Internet-capable device I ever use is my computer, which uses the wired Internet connection, which I presume is both faster and more secure than a WiFi connection. JIP | Talk 20:23, 16 November 2015 (UTC)[reply]
Oops, yes, I meant USB port. It's just a usb<->ethernet dongle. I see them for about £10 on Amazon. -- Finlay McWalterTalk 20:29, 16 November 2015 (UTC)[reply]
Furthermore, once I get it, I'd like to play other people's levels. Can these be downloaded free of charge or are there additional charges for them? JIP | Talk 19:42, 16 November 2015 (UTC)[reply]
As I understand it, when someone is finished designing a level they tell others the level's ID code (where the ID is a 16-hex-digit number). Here's an example of people posting their IDs in the SuperMarioMaker reddit: https://www.reddit.com/r/MarioMaker/comments/3t164h/level_of_the_week_8_factory_submissions_last/ -- Finlay McWalterTalk 20:09, 16 November 2015 (UTC)[reply]
(edit conflict) There's also the 10 Mario/100 Mario Challenge which chooses several levels at random from the ones users have uploaded, and you have either 10 or 100 lives to get through them all (or skip ones that might be very difficult). I believe that after playing each level it saves snapshots of those levels in your game so that you can look at them in the level designer afterward. FrameDrag (talk) 20:19, 16 November 2015 (UTC)[reply]
You might appreciate SuperBunnyHop's review of a bunch of SMM levels submitted to him here. I think it gives a reasonable idea about what is, and crucially what isn't, possible in SMM. As constructing-stuff-in-the-game type games go, it looks to be considerably inferior to Little Big Planet and especially Minecraft. -- Finlay McWalterTalk 20:13, 16 November 2015 (UTC)[reply]
Wii U is intended to be easy and idiot proof, and it succeeds admirably at those goals. You will have no problems buying (officially licensed) games as physical media or online. For inspiration on SMM, see Bananasaurus Rex's playthrough of this insane level [1] :) SemanticMantis (talk) 21:30, 16 November 2015 (UTC)[reply]


November 17

Laptop

Whenever I close my laptop, the computer will do the same thing that happens when hitting the "print screen" button, namely copy an image of the screen to the clipboard. Does this also happen on your laptop? GeoffreyT2000 (talk) 02:28, 17 November 2015 (UTC)[reply]

no. Vespine (talk) 02:33, 17 November 2015 (UTC)[reply]
Could be something physically hitting the Print Screen button. StuRat (talk) 06:14, 17 November 2015 (UTC)[reply]
Laptops usually have a setting for what to do when the laptop is closed, such as "turn off the screen" or "log off". It is possible someone changed that to "print screen" just to be annoying. 209.149.115.177 (talk) 14:52, 17 November 2015 (UTC)[reply]
@GeoffreyT2000 and 209.149.115.177: That may be possible, but might require a registry 'hack'. My Win 7 Toshiba laptop only gives the selections of: Do nothing, Hibernate, Sleep and Shutdown. No user selectables, though it may be there in BIOS. 220 of Borg 08:18, 20 November 2015 (UTC)[reply]

Detecting that a folder contains unreachable files or folders due to deep nesting

Windows has a limit of 255 characters for path names, according to [2]. It's fairly easy, when copying or renaming folders, to get into a situation where some of your data is unreachable by ordinary means because this limit has been exceeded. The previously quoted source says robocopy can bypass this limitation. Another trick is to use the subst command from a Windows shell to create an alias for a reachable folder with unreachable sub-folders, and access the nested material using the alias.

My question: Is it possible to test beforehand whether a folder contains sub-folders or files with names that exceed the length limit? I ask because I would like to avoid starting time-consuming tasks that are bound to fail and possibly mess things up (zip + move). --NorwegianBlue talk 13:08, 17 November 2015 (UTC)[reply]

Best bet would be writing a Python or PowerShell (or whatever language you're comfortable with) script to recurse through all the folders in a tree and keep track of the max path length as you go. This thread has a Visual Basic (ugh) script for doing this.
The other option is to move the files elsewhere before you begin said task, to shorten the length of the paths; depending on what you're doing this for, though, that may not be possible. FrameDrag (talk) 14:42, 17 November 2015 (UTC)[reply]
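A minimal version of such a recursive check, sketched in Python rather than Visual Basic (the function name and the 260 default are my assumptions; adjust the limit to your situation):

```python
import os

MAX_PATH = 260  # classic Win32 limit, including drive letter and trailing NUL

def find_long_paths(root, limit=MAX_PATH):
    """Walk a directory tree and yield every path whose length reaches the
    given limit, so a zip/move job can be vetted before it starts."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if len(path) >= limit:
                yield path

# Usage: list offenders before starting a long copy job, e.g.
# for p in find_long_paths(r"D:\data"):
#     print(len(p), p)
```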
The claim that Windows limits path names to 255 characters is not correct or authoritative. Microsoft's documentation, Naming Files, Paths, and Namespaces, explains that maximum path length is file-system-dependent. Certain programmatic functions in the Windows API are limited to use strings of length MAX_PATH, which is 260 characters; but most built-in utilities (including Explorer and the command prompt) are able to use extended path names, with lengths of about 32,000 characters. This has been true of all Windows operating systems for about 20 years - including Windows NT, Windows 95, and Windows CE - so unless you're using a very old system like Windows 3.1, or Windows for Workgroups, or some other ancient software, you do not need to limit path names based on the MAX_PATH limitation. When you write C or C++ code to the Windows API, if you choose to use legacy path functions, you need to be aware of these limits; but you can always use newer APIs to access the file system. If you aren't writing C or C++, (or otherwise directly linking the Windows API), you should probably ignore the "260 character" limit completely, because it doesn't apply. The shell and the individual file system may have different limitations, but this MAX_PATH is not one of them.
Nimur (talk) 16:23, 17 November 2015 (UTC)[reply]
You've misunderstood the documentation. Almost all Windows programs are limited to 260-character paths (259 + trailing NUL), including .NET programs, and there's no sign of that changing in the future. -- BenRG (talk) 18:27, 17 November 2015 (UTC)[reply]
"The Windows API has many functions that also have Unicode versions to permit an extended-length path for a maximum total path length of 32,767 characters." Maximum Path Length Limitation. I think you are conflating two different limitations: limits that are intentionally imposed by specific programs, and limits that are enforced by the operating system itself. Nimur (talk) 20:51, 17 November 2015 (UTC)[reply]
Maybe you misunderstood the quoted sentence to mean that any program that uses the Unicode API supports long paths. That's not the case. Long paths are only supported in a \\?\ prefixed form which doesn't follow the usual Win32 rules (for example the . and .. pseudo-directories aren't supported). Some software will work if you pass it a \\?\ path and it doesn't inspect it too closely, but software that understands and reliably supports long paths is rare. It is indeed a limit of "specific programs", not the kernel, but the "specific programs" include almost every program that has ever been written for 32-bit or 64-bit Windows.
(Cygwin 1.7+ is a notable exception; you can probably safely use long paths in a pure Cygwin environment.) -- BenRG (talk) 00:06, 18 November 2015 (UTC)[reply]
I'm not exactly sure where you got that by reading that MSDN page, I got a completely different read out of it. It says that normal X:\ path names are limited to 260 characters (X:\ + path + NUL terminator), and that \\?\ path names are (probably) limited to the file system limit. It also says that the filesystem itself and the shell are separate, and that it's possible to create perfectly valid things that the shell can't interpret correctly. It's also worth noting that the question wasn't about the char limit... FrameDrag (talk) 20:21, 17 November 2015 (UTC)[reply]
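Concretely, the \\?\ form the thread keeps mentioning is just a string prefix. A sketch of producing it (the helper name is mine; it assumes the input is already absolute, backslash-separated, and free of . and .. components, since the prefixed form bypasses that parsing):

```python
def to_extended_length(path):
    r"""Return the \\?\-prefixed ("extended-length") form of an absolute
    Windows path. UNC paths (\\server\share\...) become
    \\?\UNC\server\share\... Assumes an already-normalized input."""
    if path.startswith('\\\\?\\'):
        return path                      # already extended-length
    if path.startswith('\\\\'):
        return '\\\\?\\UNC' + path[1:]   # UNC share
    return '\\\\?\\' + path              # drive-letter path

print(to_extended_length(r'C:\very\deep\tree'))   # \\?\C:\very\deep\tree
```

Whether the program on the receiving end actually honors such a path is, as BenRG notes above, a separate question.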
Thanks for your replies! I did some experimenting based on the assumption that Nimur's assertion was correct, before reading the discussion. Here's what I found:
  • It was possible to create and traverse hierarchies with paths longer than 260 characters in Explorer (Windows 7). I created the hierarchy by unzipping a deep hierarchy twice (i.e. nested within itself).
  • However, there was some weirdness when I crossed the 260-character limit. For one thing, when I tried to text-copy (Ctrl-C) a folder name presented in the Explorer address bar, it changed to the old DOS 8.3 short-name representation when I clicked the address bar, thus preserving a path name shorter than 260 characters, like so:
T:\AD-HER~1\HENVEN~1\2012-0~2\TAKKTI~1\XYZ-TE~1\Brukere\N\nblue\MINEDO~1\MI\HENVEN~1\2012-0~2\Takk til Kristoffer, med nye versjoner\Ny mappe
The full version of the path name in this case would have had 385 characters.
  • When trying to create new files or folders (in a directory that was beyond the limit) by right-clicking, nothing appeared to happen. However, when I pressed F5, it turned out that the folder or file had been created with the default name, but the GUI hadn't been updated.
  • In cmd.exe, chdir with an argument longer than the 260-character limit failed. If I went step-by-step to the deepest "legal" directory, I could show the next level with "dir", but I couldn't chdir to a directory which gave a path name longer than the limit. I could, however, access the directory via explorer and start a shell with shift-click. But then, the path name was shown as above (T:\AD-HER~1\... etc).
  • dir /b /s failed with an error message when it reached a folder that was nested too deeply.
  • When I right-clicked the root folder and selected "properties", only the files that were below the limit appeared to be counted, ditto for the disk space used.
  • 7-zip (current stable version, which is from 2010) was able to successfully compress the entire hierarchy, preserving full-size names at each level, and to decompress it without error messages and with no obvious errors.
  • I also did an experiment that turned out to provide an answer to my original question. I tried to compress the folder to a zip file using the built-in Windows mechanism (right-clicking and selecting "send to compressed folder"). And voilà - an immediate error message reporting the first file name that was too long. So the question is answered, at least for Windows 7. Unfortunately, the problem that prompted the question occurred on an XP machine that needs to be maintained (don't ask)....
-NorwegianBlue talk 22:29, 17 November 2015 (UTC)[reply]
Resolved
@NorwegianBlue: - I'd be interested (as a smug Linux rubbernecker) whether the failures you got with chdir and dir in cmd.exe also happen if you run them in Powershell. I would hazard that they do not. -- Finlay McWalterTalk 22:39, 17 November 2015 (UTC)[reply]
For what it's worth, TCC/LE, a freeware almost-compatible replacement for cmd.exe, appears to support long paths. PowerShell 2.0 appears not to, but it's rather old. I can't find any clear information about newer PowerShell versions online. -- BenRG (talk) 00:06, 18 November 2015 (UTC)[reply]
(Also, chdir is never going to work because the Win32 "current directory" is just a string that's prepended to partial paths by the Win32 parsing code that \\?\ bypasses. Windows just doesn't support long current directories.) -- BenRG (talk) 00:25, 18 November 2015 (UTC)[reply]
@Finlay McWalter: - As BenRG predicted, powershell 2.0 failed in a similar (and slightly worse) manner than cmd.exe did. From the most deeply nested reachable directory, cmd.exe was able to list the files and folders of the next level. Powershell, however, only echoed the name of the current directory. --NorwegianBlue talk 07:30, 18 November 2015 (UTC)[reply]

Scanner won't open

My Epson scanner won't open when I click the icon on my computer. I can turn it on but that's it. What could be the problem?--Jeanne Boleyn (talk) 13:24, 17 November 2015 (UTC)[reply]

In technical terms, your scanner is f*cked. Try reinstalling it. If that doesn't work, try installing it on a different computer. If that doesn't work, chuck it out. Vespine (talk) 21:51, 17 November 2015 (UTC)[reply]
You can always try unplugging it and plugging it in again. Rebooting the computer may overcome trouble too. Graeme Bartlett (talk) 22:36, 17 November 2015 (UTC)[reply]

Moving data from one secondary SSD to a larger one

I've just purchased a 500GB SSD and I'd like to copy the data from my 240GB SSD (drive letter D currently) to it. Normally I'd just add the drive in, but I've actually run out of SATA ports on my motherboard, so an upgrade will have to do. Will anything bad happen if I just plug both drives in (unplugging another one temporarily) and copy all the data across, then unplug the original and give the new drive the drive letter D? I have installed programs but no OS on the drive. 81.138.15.171 (talk) 17:03, 17 November 2015 (UTC)[reply]

Nothing bad should happen (standard disclaimers apply), but if you want to be certain a kit like this (many different ones available, search around for the one that suits you best) makes cloning a disk almost painless. WegianWarrior (talk) 18:45, 17 November 2015 (UTC)[reply]
The installed programs may not appreciate being transferred between drives because of registry issues, and you might have to reinstall them, depending on how each program identifies where it is in the registry. You won't know that until you try it, unfortunately. Also, make sure you turn off your computer before swapping the drives. SATA does support hot-swap in the specification, but whether it's actually supported depends on your SATA controller. Drive cloning as mentioned above would be a mostly one-click process, but still might have the registry problems. The keyword in all this is 'might.' It also might go completely fine without any problems whatsoever. Obligatory make-sure-to-backup-important-data message here. FrameDrag (talk) 20:27, 17 November 2015 (UTC)[reply]
Another alternative for Windows 7 and above is to back up your existing SSD to another drive with enough free space, swap SSDs, format, and restore. Alternatively, SATA controller add-in cards are dirt cheap. I recently put this $20 card in a PC that didn't have enough sata ports and it works fine. --Guy Macon (talk) 21:40, 17 November 2015 (UTC)[reply]
Transferring the whole Windows installation from the 240 GB drive to the EMPTY 500 GB SSD requires deleting all data on the 500 GB target device. Connect the two drives to the computer. Boot a Linux live CD. Use gParted only to identify the drives. Use dd (Unix) to copy the 240 GB drive to the 500 GB SSD. Shut down after it completes so the partition tables are reloaded on the next boot. Remove the 240 GB drive before booting Windows! In the disk manager (formerly called windisk), select the remaining space and the system partition by holding the Ctrl key, right-click, and choose Extend. No reboot is necessary, but you need Windows administrator rights. If there are more custom partitions on the drive, reboot into Linux and use gParted to move and expand the partitions. When booting Windows, keep the old SSD disconnected from the system. You might have to activate your copy of Windows again. Note that dd is a command-line tool: beware of accidentally copying the target onto the source, which would delete all your data! --Hans Haase (有问题吗) 11:46, 21 November 2015 (UTC)[reply]
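The dd step above, sketched with image files standing in for the two drives (the device names are the dangerous part: identify the real disks in GParted or with lsblk first, because swapping if= and of= wipes the source):

```shell
# Stand-ins for the two drives; on real hardware these would be something
# like if=/dev/sdX (240 GB source) and of=/dev/sdY (500 GB target) -- those
# names are assumptions, check yours before running anything.
dd if=/dev/urandom of=small-disk.img bs=1M count=4        # fake source drive
dd if=small-disk.img of=big-disk.img bs=4M conv=noerror   # the clone step
# With real drives you would additionally want sudo and status=progress.
```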
I would just use Clonezilla. I've used it many times and have never had problems. It will copy everything from the 240 GB drive to the 500 GB drive and clear anything off the 500 GB drive for you. The drive letters will stay the same. The only thing you'll have to do after running it is expand the partition to fill up the rest of the space on the 500 GB drive in the Windows Disk Management console. You just plug both drives into your motherboard, boot from the Clonezilla ISO, and follow the prompts.—Best Dog Ever (talk) 11:58, 21 November 2015 (UTC)[reply]

Linux hard disk questions

Because of circumstances beyond my control, I no longer have the right to use the Windows drive that was in the computer my company gave me. I can keep the actual computer though, so I have physically removed the drive. Fedora 20 Linux boots up happily without it, but I still see "Windows Boot Manager" in the boot menu even though the computer is now 100% Bill-free. How do I get rid of it?

This has also given me an empty drive bay. Assuming I buy another hard drive, can I somehow have one partition, or one mount point, span both drives: part of one and the whole of another, or part of one and part of another? I think I must have asked this question earlier, but I have forgotten the answers. JIP | Talk 18:08, 17 November 2015 (UTC)[reply]

The issue will be which boot manager you are running. If it is grub2 (the F20 default), you can see the grub.cfg file, usually at /boot/grub2/grub.cfg. It looks like a mess, but it is rather easy to find each kernel that it will load. You can delete the one you don't want. I know that there are tools to edit the grub config, but I don't use them, so I cannot explain how to use them. As for spanning multiple disks, that is what LVM is used for. Again, this is the default for F20. Both disks become a volume that you, the user, see as a single drive. My suggestion is to back up all the files you really want to keep. Then put in a second drive. Then install the latest Fedora from disk and tell it to completely format and reuse all drive space. You'll quickly end up with a single logical volume over two disks, and the boot manager will be cleaned up. 209.149.115.177 (talk) 18:37, 17 November 2015 (UTC)[reply]
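Instead of hand-editing, one way to clean the menu on Fedora is to regenerate it; note also that on UEFI machines "Windows Boot Manager" is often a firmware boot entry rather than a grub one, in which case efibootmgr manages it. A sketch (the entry number below is an assumption, read it off the listing first):

```shell
# Regenerate the grub2 menu now that the Windows disk is gone (Fedora paths):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# If the entry persists, it may live in UEFI firmware; list and delete by number:
efibootmgr                    # note the BootNNNN number of "Windows Boot Manager"
sudo efibootmgr -b 0003 -B    # 0003 is an assumption -- use the number shown above
```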
I think that when I fill up my current drive, I can buy another two drives and keep the old one as a backup. I can then install the latest Fedora release on the new drives and use LVM on them. Is it possible to have LVM only use part of one drive and the whole of another? How is it controlled which files end up on which drive if they're the same logical partition? Or will the files themselves be spread out across the drives, with part of a file lying on one drive and part on another? What happens if I remove one drive? Can I still access the remaining drive's files or will the whole LVM system fail? If I migrate the drives themselves to another computer, is it enough to plug them both in somewhere? Do I have to boot the computer from the LVM drives for LVM to work or will it somehow automatically work even if I boot from a normal singular drive? JIP | Talk 19:13, 17 November 2015 (UTC)[reply]
LVM merges partitions into a single logical volume. You don't really have much control over where files go (there are advanced LVM settings, but I don't play with them). So, you can partition a single drive into two partitions and only use one of those partitions in a logical volume. The cool thing about LVM is that you can remove and add partitions whenever you like. I had a disk that was reporting too many bad sectors. I removed it from the LVM (which took time, but automatically migrated all data off the failing drive). Then, I added a new drive to the LVM. My file system was, from my point of view, the same (technically bigger). I also solved a problem we had using LVM. We had a JBOD with 24TB of disk space in it. We kept getting file corruption. After a hell of a lot of research, I came to suspect the size of the disks. It reported as a single 24TB partition. So, I broke it into six 4TB partitions on the JBOD side. I didn't lose much data storage - just a very minor amount for the partition overhead. Then, on the server side, I merged the six partitions into a single logical volume. After a year of monthly (or more frequent) file corruption, we ran for the next two years without a single corrupt file. The users still saw the 24TB "disk", which was actually a volume of six partitions. 209.149.115.177 (talk) 19:45, 17 November 2015 (UTC)[reply]
So, as I understand it, LVM divides files between the drives but doesn't actually spread out individual files, and I don't get to choose what files go where. But I didn't really see a reply to my other questions. Is an LVM drive or partition only accessible through LVM, or can it be mounted as a standard drive or partition and whatever files there are can be accessed? Do LVM drives or partitions automatically know they're under LVM and so all I have to do is to plug them in and they're accessible through LVM? How do I actually add or remove a drive in LVM? Does a drive under LVM obey the normal file system structure or does LVM have its own file system? Is an LVM drive accessible if I don't plug in all of the drives under LVM? In particular, I want to know whether an LVM drive can be accessed without LVM, and whether installing LVM drives to a new system is as simple as just plugging the drives in. JIP | Talk 20:15, 17 November 2015 (UTC)[reply]
LVM is part of Fedora. You may already be using it. Run "sudo lvdisplay" to see what logical volumes you have. I only have one drive in the computer I'm using right now, but it has a logical volume wrapping the single user partition on that drive. The volume is what the OS sees as a "drive". You can format the volume with whatever filesystem you like. It is just a drive. All in all, it is another level of abstraction. You have the physical disk. Above that, you have partitions. Each partition is a separate "drive" even though they are all one physical disk. Volumes treat multiple partitions as single drives. To get technical, it comes down to block mapping. Block 82 of my logical volume will be mapped to, say, Block 129 of one of the partitions in the volume. But, that partition is part of a physical drive. Block 129 of the partition may be block 2031 on the physical drive. You (and your files) don't know about the indirection. They just know about the volume, because the filesystem resides on the volume. So, assume you are using something simple like a file allocation table based filesystem. You have a file allocated to blocks 153 to 187. Those are logical volume blocks. You don't control which partition owns those blocks or what block they are on the partition. You can do advanced settings, but I don't mess with that. As for messing with volumes, you use the lv* commands (I'm sure there's a GUI also). There is lvcreate, lvdisplay, lvextend, lvremove, lvresize -- just off the top of my head. I'm sure there are more lv commands. In the end, you won't notice you are using logical volumes until you decide to mess with the volumes. Otherwise, they are just drives built into Fedora's file management system. 209.149.115.177 (talk) 20:49, 17 November 2015 (UTC)[reply]
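The two-level indirection described above can be sketched as a toy model (the block numbers 82, 129 and 2031 are just the illustrative figures from the post, not real LVM metadata, and the partition name "sda3" is made up):

```python
# Toy model of LVM's two-level block indirection: the filesystem sees
# only logical-volume blocks; the volume manager maps each one to a
# block within some partition, and that partition in turn sits at an
# offset on a physical disk.

volume_to_partition = {82: ("sda3", 129)}    # logical block -> (partition, block)
partition_to_disk = {("sda3", 129): 2031}    # partition block -> physical block

def physical_block(logical_block):
    """Resolve a logical-volume block to its physical-disk block."""
    return partition_to_disk[volume_to_partition[logical_block]]

print(physical_block(82))  # 2031
```

The filesystem only ever sees the left-hand side of the first mapping, which is why drives can be added or removed underneath it without the filesystem noticing.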
I'm finally getting my head around all this. So LVM is not about actually storing the files, but about accessing them? The files get stored as normal files under normal file systems, but LVM is just there to let me access all of them via a single mount point? JIP | Talk 21:07, 17 November 2015 (UTC)[reply]
That is mostly correct. When you merge a partition into a volume, the partition will be formatted by the logical volume manager to be part of the volume. You can't take a drive out of a logical volume and slap it into another computer. It is no longer independent. It is a dedicated part of a volume. This is very similar to RAID. If a partition is part of a RAID 5 volume, you cannot take a single drive out of it and use it elsewhere. So, it is not like taking a bunch of independent drives that work fine by themselves and putting them under one mount point. It is merging the filesystem of a bunch of drives so they are dependent on one another. As such, they become a single mount point. 75.139.70.50 (talk) 23:30, 17 November 2015 (UTC)[reply]
So I have to have both drives plugged in for them to be usable. If I migrate the drives to a new system, is it enough to just plug them both in, providing the host system supports LVM, or do I have to actually boot from the LVM drives or do some magical LVM configuration first? JIP | Talk 18:40, 18 November 2015 (UTC)[reply]
A logical volume expects to have all partitions at all times. You can move it to another machine. You unmount it (obviously). Then, you mark it inactive and export it. That will make the partitions movable. Once it is on a new machine, you import the partitions, mark the logical volume active, and mount it. The pertinent commands to read up on are vgchange (to make groups active/inactive) and vgexport/vgimport. 75.139.70.50 (talk) 16:53, 19 November 2015 (UTC)[reply]
I'm fairly sure I use grub2, because it's the F20 default. I don't remember choosing any alternative boot manager. I have a /boot/grub2 directory but there is no grub.conf file there. locate grub.conf didn't find it either. JIP | Talk 19:16, 17 November 2015 (UTC)[reply]
Mine is in /boot/grub2/grub.cfg - I think I typo'd with grub.conf earlier. 209.149.115.177 (talk) 19:45, 17 November 2015 (UTC)[reply]
Mine is in /boot/efi/EFI/fedora/grub.cfg; it depends on whether your machine uses UEFI or not. --70.49.170.168 (talk) 19:50, 17 November 2015 (UTC)[reply]
I did find /boot/efi/EFI/fedora/grub.cfg, and there is a line in it for "Windows Boot Manager". But at the last moment, I chickened out from editing the file. If I make any errors, could it cause grub2 to fail and make my system unbootable, and therefore pretty much unusable? JIP | Talk 20:26, 17 November 2015 (UTC)[reply]
I don't use a GUI or special "helpers". I Googled, and "grubby" appears to be a tool Fedora has to alter the grub.cfg file without manually editing it. 75.139.70.50 (talk) 23:30, 17 November 2015 (UTC)[reply]

Open alternatives to Kindle

As I understand it Kindle is very snoopy and DRMy. Is there an alternative? Or can I jail-break a Kindle?
All the best: Rich Farmbrough, 20:53, 17 November 2015 (UTC).[reply]

Defective by Design's list of DRM-free ebook suppliers is here. -- Finlay McWalterTalk 23:18, 17 November 2015 (UTC)[reply]
Kindle is a tablet. There are many tablets on the market that aren't locked down by Amazon. Many are cheaper than a Kindle. If you are talking about the Kindle Reader app, you can get many books in PDF form and read them with any PDF reader you like. If you are talking about Kindle books, there are programs that you can use to convert those to PDF documents. It isn't the easiest thing to do, but it exists. 75.139.70.50 (talk) 23:32, 17 November 2015 (UTC)[reply]
Try one of the Kobo eReaders. Instructions for Building The Kobo Reader Sources are here. --Guy Macon (talk) 00:07, 18 November 2015 (UTC)[reply]
Fairly sure the OP is referring to the Amazon Kindle eReader devices, which aren't tablets under most common definitions. And there aren't that many eink/epaper or eink/epaper-like devices any more (well, there never were many, but they were more common than tablets at one time). There is, however, more than the Kindle. Kobo is the most obvious example, but there's also the Nook, a bunch of Hanvon devices and others. The MobileRead forums [3] and wiki [4] are a good source of information for such devices. There are even some with Android, but I'm not sure if it's really a good choice. Amazon does seem to be going down the route of Apple, making it difficult for users to get full control over their devices, e.g. [5] [6].

Note however, depending on precisely which version you're looking at (including ad supported or not), where you live and whether Amazon is having one of their regular specials, you'll probably find most of these devices are at best comparable in price to Amazon devices, not cheaper.

PDF is a fairly bad format for most fiction eBooks, since it's designed for fixed layout, which is unnecessary and means the book generally only works well on certain screen sizes. Likewise the ability to change font size etc. (a big advantage with ebooks) is limited. PDF should only really be used for ebooks when fixed layout is needed, such as with text books, picture books, journal articles etc., although even there, there are alternatives for fixed layout.

EPUB, which is an ISO standard, is probably the only common standard ebook format that's somewhat open and used by pretty much everyone other than Amazon. (The latest versions also support fixed layout and may be where picture books and possibly text books will move, although as I understand it EPUB 3 is a very complicated format, hence why it's taken so long for even full-featured commercial rendering engines to appear.)

However, most ebooks in EPUB format from commercial sources have DRM. In some countries you can legally remove this DRM if you own a licence for the book. OTOH in some countries, such as the US with the DMCA or similar laws with anti-circumvention clauses, you're potentially breaking the law (as in a criminal matter, rather than a civil matter between you and the copyright owner) if you do so for most purposes, regardless of what licence you have (enabling the book to work with text-to-speech tools may be one exception in the US, but I'm not certain).

If you're able to remove the DRM, it's relatively easy on a computer for a number of DRM formats. This BTW includes the DRM Amazon uses in Kindle books. I believe it also includes Apple's FairPlay DRM (never done it myself, but what I've read suggests it's possible). And perhaps the key one is Adobe Digital Editions DRM, which many EPUB vendors, including I believe Google Play, Barnes & Noble and Kobo, support for transferring to other ereaders (their own devices sometimes use different DRM).

While the popular library management application Calibre refuses to officially support DRM removal in any way, there are plugins that, after set-up, mean you can basically just import the book like normal (not sure about Apple). Calibre BTW also has fairly good automatic conversion between ebook formats. It can even often fix ebooks which are unnecessarily PDFs (or otherwise fixed layout), although your best bet is to avoid them. (Calibre can also handle TXT, saved HTML, RTF, DOC, older formats like LIT - actually pretty much anything you're likely to encounter, albeit with the possibility that some formatting may be lost, or you may get some weird results from particularly poorly made sources.) So with some minor technical competence, dealing with DRMed ebooks isn't generally that different from non-DRMed ones. And likewise you may not have to care whether your books are from Amazon or someone else, which may not match your device. (If you're in the US, this may be an advantage because from what I've read Amazon often has the cheapest ebooks, although as mentioned, if you're in the US the DMCA may cause problems.)

Note that most commercially sourced PDF eBooks are DRM protected too, so it's not like getting PDFs allays DRM concerns. Also most ereaders including Amazon's do support DRM free ebooks in the formats they support for DRMed ebooks, if you get them from somewhere (be it originally like that or with the DRM removed). Obviously with Amazon their lack of support for epub means conversions may be necessary (although as mentioned is generally easy). However the Kindle devices are popular enough that MOBI and KF8 books are also very common. E.g. Humble Bundle DRM-free fiction ebooks generally provide both Mobi and EPUB, and possibly PDF [7] [8].

BTW, if you do have a tablet, there are plenty of ereader apps on both Android and Apple Store, and even Windows Store, which support proper ebook formats like epub or Mobi (probably both if they aren't vendor apps). Likewise on most laptop/desktop OSes. So there's never any real reason to use a fixed layout format like PDF for ebooks, where it isn't needed by the book.

Nil Einne (talk) 12:45, 18 November 2015 (UTC)[reply]

While there is some perception that the Kindle is closed, that's not quite true. Sure, it only supports DRMed books from Amazon. But it does understand formats like PDF (which sucks for eBooks) and MOBI, which is basically the same as (one version of) Amazon's AZW. The open-source calibre (software) can convert many other formats, including EPUB, and you can put books onto the Kindle via USB, either directly (it mounts as a USB disk) or via calibre. So if you don't want to use Amazon, you can put it into Airplane mode (or just never join a WiFi network) and treat it as an unconnected device for all non-DRMed books, including most of Project Gutenberg and many, many commercial ebooks. --Stephan Schulz (talk) 19:09, 20 November 2015 (UTC)[reply]

November 18

Computer systems that handle leap seconds honestly?

As noted in our article, Unix time (the well-known seconds-since-1970 scheme) is "no[t] a true representation of UTC". This is because it makes no formal provision for leap seconds, and in fact some rather dreadful kludges are necessary at midnight on leap second days in order to handle leap seconds at all (and there have also been some rather dreadful bugs).

My question is, does anyone know of an operating system (mainstream or not) that is able to handle UTC leap seconds up-front and properly? By "up-front and properly" I mean that

  1. the kernel-level clock runs for 86,401 true seconds on a leap-second day (analogously to the way a true and proper calendar runs for 29 whole days in February during leap years), without having to do anything kludgey
  2. a user-level program that prints the time of day (perhaps after fetching a time_t value from the OS, perhaps after using a C library function like ctime or localtime) will actually print a time like "23:59:60" (as illustrated on our leap second page)

In terms of the well-known, mainstream operating systems, as far as I know, all versions of Unix, Linux, and Microsoft Windows fail both of these tests. (I'm not sure about MacOS.) —Steve Summit (talk) 01:42, 18 November 2015 (UTC)[reply]
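The failure of test 2 under POSIX is easy to demonstrate: Python's calendar.timegm implements the plain POSIX seconds-since-epoch arithmetic, so the real leap second at the end of June 2015 simply has no time_t slot of its own:

```python
import calendar

# POSIX arithmetic: 23:59:59 and the following day's 00:00:00 come out
# exactly one second apart, leaving no time_t value for the leap
# second 23:59:60 that UTC inserted between them.
t_before = calendar.timegm((2015, 6, 30, 23, 59, 59, 0, 0, 0))
t_after = calendar.timegm((2015, 7, 1, 0, 0, 0, 0, 0, 0))
print(t_after - t_before)  # 1

# Asking for 23:59:60 yields the same number as 00:00:00 -- two
# distinct UTC instants collapse onto one time_t value.
t_leap = calendar.timegm((2015, 6, 30, 23, 59, 60, 0, 0, 0))
print(t_leap == t_after)  # True
```

This is exactly the ambiguity discussed further down the thread: reducing UTC to a single counter makes the instant 23:59:60 indistinguishable from 00:00:00.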

The tz database has time zones under right/ that count leap seconds, and I think that you can get a value of 60 in tm_sec (which is permitted by POSIX) if you use one of those. But actually using them appears to violate POSIX. -- BenRG (talk) 04:14, 18 November 2015 (UTC)[reply]
I believe the traditional Operating systems of the IBM zEnterprise System (the successor of System 370), such as z/OS and z/VM, correctly handle leap seconds in the operating system, if set up to do so; the hardware certainly supports correct handling. According to this z/OS spins during the leap second, so it isn't full support. I don't know about the open operating systems that have been adapted to run on zEnterprise System. I also don't know the extent to which the leap second support has filtered all the way down to application programming languages. I did find a PL/1 manual online (a language more or less exclusive to IBM); I'll report back on whether that seems to support leap seconds. Jc3s5h (talk) 15:22, 18 November 2015 (UTC)[reply]
Thanks! (Just the sort of thing I'm looking for.) —Steve Summit (talk) 15:52, 18 November 2015 (UTC)[reply]
If that's all you want (spinning during the leap second rather than following the UTC spec), any POSIX system will act the same way if the system it runs on handles leap seconds using kernel clock discipline. See [ http://www.madore.org/~david/computers/unix-leap-seconds.html ].
Guy, please, if you don't know what I'm looking for or don't want to help, then don't. If you think I'm attacking Posix I'm not; if you think I'm a dangerous heretic for even asking these questions then, please, take it to my talk page.
Although the IBM systems Jc3s5h cited may not pass my second test, it looks like they do pass my first, so they are the sort of (not necessarily the exact) thing I'm looking for. —Steve Summit (talk) 16:14, 18 November 2015 (UTC)[reply]
Why the personal comments? I am giving you correct technical information and attempting to correct what appears to be a misunderstanding on your part. Both IBM and POSIX count up from an epoch, ignoring leap seconds. The fact that one counts seconds since 1970 and the other counts 0.244 ns units since 1900 is immaterial. Both systems are equal as far as meeting your tests, but only if the POSIX system uses kernel clock discipline. If you have some sort of problem with POSIX that makes you insist that two systems act differently when they actually act the same, just let me know and I won't bother posting any further correct technical information in response to your questions. --Guy Macon (talk) 16:25, 18 November 2015 (UTC)[reply]
If you read the reference Jc3s5h posted, the implication (though I can't be 100% sure of this) is that the systems in question are keeping TAI internally. They may not maintain a complete history of when historical leap seconds occurred, but they do maintain a current notion of the TAI-UTC offset, so that this offset can be applied when returning UTC timestamps to user processes. So, actually, it appears to be a rather different solution, not "the same" as Posix at all (although it may end up appearing that way to a user process, after all). —Steve Summit (talk) 18:02, 18 November 2015 (UTC)[reply]
Perusal of the manual I mentioned earlier indicates that the only time and date format with a thorough description is the Lilian date. The given example makes it evident that leap seconds are ignored. The other source I mentioned, http://www-01.ibm.com/support/docview.wss?uid=tss1wp102081&aid=1 has the person setting up the system enter the leap second offset at the time of setup, and schedule leap second insertions thereafter. So there is no concept of maintaining a history of all the leap seconds that have ever happened. So it appears that in z/OS, supporting leap seconds means not crashing. Historical leap seconds are treated as if they did not exist. I'm not sure the extent to which the most recent leap second is supported; I wonder if there is any function that would report the final minute of June 30, 2015, as 61 seconds long. Jc3s5h (talk) 16:12, 18 November 2015 (UTC)[reply]
Ah, well. Thanks much for your research.
To clarify (for people like Guy who seem to misunderstand what I'm asking): I am not looking for an out-of-the-box operating system to install on my PC at home that handles leap seconds to my satisfaction. I'm looking for history: how have other systems handled leap seconds, how well did it work, and what can I learn from the attempts? If I have my own ideas for handling leap seconds, have they been thought of and tried before, and what was the experience? —Steve Summit (talk) 16:19, 18 November 2015 (UTC)[reply]
"People like Guy?" I find that comment to be rather insulting. I understood from the start that you are looking for a theoretical discussion about how other systems handled leap seconds. There was nothing unclear about your original question. This is a well-known problem in computer science.
To meet your tests for past dates you need to keep a record of all past leap seconds and provide a reliable method for updating that record when new leap seconds are announced. Once you have that it is simple arithmetic. To meet your tests for future dates is theoretically impossible. You cannot tell us how many seconds will elapse between midnight tonight and midnight on this day 50 years from today.
Many (most?) experts appear to agree that the best practical system is an incrementing counter that ignores leap seconds with the increment frequency modified by kernel clock discipline. (See reference I have given you twice already). --Guy Macon (talk) 16:49, 18 November 2015 (UTC)[reply]
Okay. So (ignoring what the experts appear to agree is best or practical), let's make a list:
  1. Keep a monotonically-incrementing counter, in units of seconds, since an epoch. Define 86400 secs/day. Tinker on leap-second days to taste.
  2. Keep a monotonically-incrementing counter, in units of seconds, since an epoch. Manually insert seconds into the counter on leap second days.
  3. ______ ______ ___________ __ _ _____ ________ ____.
  4. __ ______ __ ____ ___________ __ _ __ ____ _______.
Number 1, of course, is the Posix approach. #2 is (an approximation of?) the IBM mainframe approach mentioned above. Please help me fill in 3, 4, and perhaps more. —Steve Summit (talk) 17:12, 18 November 2015 (UTC)[reply]

It depends on what you mean by "manually insert seconds into the counter". What was described for (some) IBM systems does not change the counter in any way - it remains monotonically incrementing and every minute is exactly 60 seconds long, including minutes with leap seconds.

What you call the IBM mainframe approach (despite it being available on POSIX or pretty much any other time/date scheme) is not inserting seconds into the counter, but rather slowing down or speeding up the rate at which the hardware clock increments until the ignores-leap-seconds counter agrees with UTC.

In other words, the definition of "second" used by the computer changes for some short period of time so that it no longer equals "the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom."

UTC, on the other hand, never varies from the caesium 133 standard, and every second in UTC without exception is exactly 9192631770 periods. UTC handles this by actually inserting or deleting seconds. In other words, instead of redefining how long a second is, UTC redefines (for a short time) how many seconds are in a minute.

Both techniques have advantages and disadvantages, but my point is that the way IBM handles times and dates is essentially the same as the way POSIX handles times and dates. The only difference is when the counter starts and how big each increment is. I cannot emphasize this enough. Any computer that ignores leap seconds can be synchronized with UTC by slowing down or speeding up the hardware clock that feeds the time/date counter. The first two schemes listed are the same scheme.

Steve, could you please read the two references I provided and comment on them? I don't know for sure that you haven't read them, but I do know that you keep referring to what was described for IBM mainframes above without ever showing any indication that you are aware that my first reference explains the same technique with a lot more detail and explains exactly how some POSIX systems use the technique. --Guy Macon (talk) 00:52, 19 November 2015 (UTC)[reply]

Yes, I have read the two references you posted (and the one Jc3s5h posted). I thanked you for yours in this edit. Lemme see, I believe I still have all three open in various browser tabs.
My takeaways:
  • This one says that leap seconds don't matter that much to most people, and we should stop stressing about them so much. I'm intrigued to learn Linus thinks this, but I believe that they're a big enough problem that enough people are worried about (and our current solutions are still sufficiently inadequate) that more work is needed.
  • This one does a good job of summarizing most of the issues, including the fundamental inability of a Posix time_t to accurately represent a leap second. ("By reducing UTC to a single number, we have an ambiguity in the actual instant being referred".) This page also suggests that one piece of a better solution might involve reporting leap seconds to user space unambiguously (but mostly compatibly) using deliberately nonnormalized struct timespec values. This is a simply marvelous idea which I am delighted to learn of (and I have written an email to its author thanking him for it).
  • Finally, from this one I take away that these IBM mainframes keep something close to TAI internally, and maintain the current value of UTC-TAI (i.e., the current leap second count) in a kernel register so that they can add it in when they deliver UTC to user processes. I think (though you seem to disagree with me here) that this is different enough from the typical Posix scheme to be interesting. —Steve Summit (talk) 04:12, 19 November 2015 (UTC)[reply]

A bit more information from someone who posted to my talk page:

  • The "Future of UTC" colloquium has, naturally enough, dealt with many/most/all of the issues that have come up in this thread (and then some).
  • This page describes a set of techniques for synching your NTP server and the rest of your computers to GPS time, and then using a modification of the "right" tzinfo files (mentioned earlier in this thread by BenRG) to convert to UTC.

Steve Summit (talk) 05:21, 19 November 2015 (UTC)[reply]

Those are really good. Thanks! --Guy Macon (talk) 06:33, 19 November 2015 (UTC)[reply]

it's complicated...

It is a lot more complicated than simply defining one possible behavior as "handling leap seconds honestly", when the defined behavior implies that, when converting a seconds_since_epoch value to a time in hours/minutes/seconds, the computer has to give one of two possible hours/minutes/seconds answers for a particular seconds_since_epoch value, and then one second later convert the exact same seconds_since_epoch value into a different hours/minutes/seconds value -- and the conversion has to (by psychic powers?) pick the right answer even when converting a stored seconds_since_epoch value. See [ http://www.madore.org/~david/computers/unix-leap-seconds.html ] Also, this definition of "handling leap seconds honestly" is completely unable to handle them "honestly" for dates and times a few years in the future, because we have no idea when leap seconds will be added or subtracted. See [ http://www.wired.com/2015/01/torvalds_leapsecond/ ]. --Guy Macon (talk) 12:38, 18 November 2015 (UTC)[reply]
By "honestly" I simply meant, "in accordance with the definition of UTC", which is that some days have 86,401 seconds in them, an occurrence which the Posix definition of time_t simply cannot handle.
Yes, it is complicated. But it would be a lot less complicated if people would stop assuming that it is written in stone somewhere that the best and the only way of representing time is as seconds since the epoch. That is a notion which was invented by and is convenient for computer programmers, but it is a notion which has now so constricted our thinking that we are on the verge of abandoning leap seconds because it looks like they're practically impossible to handle correctly.
(I haven't read the two references you cited yet - thanks very much, they look very useful - but I mostly know what they're going to say, because I've read plenty of others, and I do understand the objections to trying to handle leap seconds by other than the current ghastly kludges.)
Here's a thought experiment to help move the argument past the but-if-we-honor-leap-seconds-we-can't-even-compute-the-difference-between-two-timestamps-without-consulting-an-external-table concerns. This year my birthday fell on October 2 = 1443744000 UTC. Suppose I want to know when my birthday will be next year. I compute 1443744000 + 365 × 24 × 60 × 60 = 1475280000. Converting that from UTC back to a calendar date I get... October 1! Wait a minute, what went wrong? —Steve Summit (talk) 13:13, 18 November 2015 (UTC) [edited 15:29, 18 November 2015 (UTC)][reply]
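The arithmetic in that thought experiment can be checked directly; Python's datetime follows the same POSIX convention of ignoring leap seconds, so it reproduces the off-by-one-day result:

```python
from datetime import datetime, timezone

# 1443744000 is the timestamp from the thought experiment above.
birthday = datetime.fromtimestamp(1443744000, tz=timezone.utc)
print(birthday.date())  # 2015-10-02

# Naively adding 365 days' worth of seconds...
next_guess = datetime.fromtimestamp(1443744000 + 365 * 86400, tz=timezone.utc)
print(next_guess.date())  # 2016-10-01
```

The answer is a day short because 2016 is a leap year and a Feb 29 intervenes - the same class of calendrical fact (an external table of irregularities) that leap-second-aware arithmetic needs, yet leap days are universally considered manageable.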
But really, I don't want to get in a long argument here about whether leap seconds are a good idea or not, or how hard they are to handle. The question I asked in the section above and am looking for answers to is simply, have systems handled them by other than the Posix definition? (Feel free to take this to my talk page if you think I'm being too heretical by even asking the question.) —Steve Summit (talk) 13:19, 18 November 2015 (UTC)[reply]
Non-POSIX time and date? There are many such systems.
The Time of Day Clock on the S370/zSeries IBM mainframe is a 64-bit count of 2^−12 microsecond (0.244 ns) units starting at 1 January 1900.
In MS-DOS, the date timestamps stored in FAT are in this format:
  • 7 bits: binary starting from 1980 (0-119) (years 1980 to 2099)
  • 4 bits: binary month number (1-12)
  • 5 bits: binary day number (1-31)
  • 5 bits: binary number of hours (0-23)
  • 6 bits: binary number of minutes (0-59)
  • 5 bits: binary number of two-second periods (0-29) (0 to 58 seconds)
--Guy Macon (talk) 15:28, 18 November 2015 (UTC)[reply]
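A sketch of decoding that packed layout (FAT stores it as two 16-bit words, one for the date and one for the time, with the field widths listed above; the example values are made up for illustration):

```python
def decode_fat_timestamp(date16, time16):
    """Unpack the MS-DOS/FAT date and time words described above."""
    year = 1980 + ((date16 >> 9) & 0x7F)   # 7 bits, years since 1980
    month = (date16 >> 5) & 0x0F           # 4 bits, 1-12
    day = date16 & 0x1F                    # 5 bits, 1-31
    hour = (time16 >> 11) & 0x1F           # 5 bits, 0-23
    minute = (time16 >> 5) & 0x3F          # 6 bits, 0-59
    second = (time16 & 0x1F) * 2           # 5 bits of two-second periods
    return (year, month, day, hour, minute, second)

# 18 November 2015, 15:28:30, packed by hand:
date_word = (35 << 9) | (11 << 5) | 18
time_word = (15 << 11) | (28 << 5) | 15
print(decode_fat_timestamp(date_word, time_word))
# (2015, 11, 18, 15, 28, 30)
```

Note the two-second resolution of the seconds field: FAT cannot represent odd seconds at all, let alone a leap second.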
The problem with Steve's thought experiment is that birthdays are usually commemorated on the same calendar date each year. Since 2016 is a leap year, one must add 366 days, not 365. One of the problems with all the time/date scales is that we're missing one: observe true UTC on the date the leap second occurs, but treat all days in the past or future as containing 86,400 seconds. For any historical record that was recorded in true UTC, or any future event scheduled in 86,400 second days but which must start according to a clock that displays UTC, the people involved are officially on their own and should do what they think is right. We have no name for this concept.
This concept is really not much different than how the law treats time in most situations. The people involved in an event record the time using whatever equipment and procedures they think are appropriate, and if they later disagree and can't settle it among themselves, they go to court and get a decision that applies to that one particular situation. Jc3s5h (talk) 17:14, 18 November 2015 (UTC)[reply]
To expand on the above, for some people you need to add 1460 days to get to the next birthday. This is a major plot point in The Pirates of Penzance; Frederic was indentured until his 21st birthday, but was born on 29 February and thus will be released when he is in his 80s. Leap years, like leap seconds, cause all sorts of complications. I say we should simply build huge rocket engines on the equator and adjust the rate at which the earth rotates. --Guy Macon (talk) 01:02, 19 November 2015 (UTC)[reply]
You need to credit Larry Niven, for One Face --Trovatore (talk) 01:15, 19 November 2015 (UTC) But nice finessing of the issue of whether Gilbert realized 1900 was not a leap year! --Trovatore (talk) 01:25, 19 November 2015 (UTC) [reply]

Guy Macon's take on z/Enterprise System seems to account for only one of two possible ways it can be set up. If I understand Leap Seconds and Server Time Protocol (STP) correctly, installations that don't need accurate time around the time of a leap second ("accurate" meaning an error of well under a second) can just set the number of accumulated leap seconds to 0 and not schedule any leap second insertions. When the external time source starts reporting a time one second different from the time-of-day (TOD) clock, the TOD will be steered to agree with the external source. An installation that needs all recorded times to be accurate can set the accumulated leap seconds and schedule leap seconds. When this is done, the description of the machine instructions and TOD clock contained in the z System Principles of Operation will be correct; the time-of-day clock will contain the time elapsed since the IBM standard epoch, 0 AM January 1, 1900 UTC [sic; it's really extrapolated back from 1 January 1972 UTC with no leap seconds between 1900 and 1972]. When used in the more accurate setup, the accumulated leap seconds are applied to the TOD clock before comparing to the external time source, so the TOD clock will only be steered for real clock errors, such as the crystal in the TOD clock having a slightly different frequency than the atomic clocks that control the external time source.

If the system is set up in the more accurate mode, a machine instruction is available to report the value of the TOD clock; when converted to a number of seconds and added to 0 AM 1 January 1900, this will give the reading on a time scale that agrees with UTC on 1 January 1972, and counts all leap seconds. There is another form of the machine instruction that will report the UTC time of day or the local time of day.

So I infer the way the accurate setup works is that it can record the actual UTC time of day of any event to a sub-second accuracy, even immediately before or after a leap second. But since z/OS spins during the leap second, the system thinks that no events occurred during the leap second. For example, there would have been no need to record an event as 30 June 2015 23:59:59.5 UTC because as far as the system is concerned, no event occurred at that time. Jc3s5h (talk) 15:41, 19 November 2015 (UTC)[reply]

Thanks! I had missed the second method. Good information.
So far all leap seconds have been positive (added seconds). In the case of a negative leap second (perhaps someone overdoes the rockets on the equator...) the "less accurate" technique seems like it would work just fine (steering to the correct time provided by the external time source at a rate of approximately 1 second per 7 hours works either way without disruption), but I am not sure I fully understand the "more accurate" technique in the case of a negative leap second. Am I correct in assuming that it simply has a single minute that is 59 seconds long? --Guy Macon (talk) 17:40, 19 November 2015 (UTC)[reply]
The sources I've found don't go into enough detail to say what would happen in the case of a negative leap second. My guess would be that the TOD clock would just keep counting as it always does, perhaps even if the system is otherwise powered down. I would need to think about it for a bit to figure out if there would be any need to spin the OS to prevent two different TOD readings that have the same broken-down UTC date/time, or two different UTC date/times that have the same TOD reading. Jc3s5h (talk) 18:39, 19 November 2015 (UTC)[reply]
As far as a running process is concerned, a negative leap second is not much different than being swapped out for a second. As far as a kernel is concerned, it's not much different from a virtual machine saving its state to a file, shipping that file across the net to some other VM host, and resuming that VM a second or more later. Steven L Allen (talk) 18:58, 19 November 2015 (UTC)[reply]
I've done a few calculations. Imagine there is a negative leap second at the end of 2016. A z/Enterprise System with the more accurate setup would report at 23:59:58.99999 Dec. 31 2016 that the TOD clock reads 3,692,217,624,999,990 microseconds. Ten microseconds later the TOD clock would read 3,692,217,625,000,000 which would be reported as 00:00:00 Jan. 1, 2017 UTC. So as far as a running process was concerned, as long as it was only looking at the unconverted value of the TOD clock, nothing much happened except the passage of 10 microseconds. Only if the process issued a machine instruction demanding the time of day in UTC would anything out of the ordinary happen, namely that the second named 23:59:59 never happened. Jc3s5h (talk) 21:38, 19 November 2015 (UTC)[reply]
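That arithmetic can be checked in a few lines of Python. This is only a sketch, assuming the 1900-01-01 TOD epoch described above and a hypothetical total of 25 accumulated leap seconds (the 26 real ones as of 2015, minus the imagined negative one); it deliberately ignores the discontinuity itself, which naive datetime arithmetic cannot represent (a real conversion would have to skip the nonexistent second 23:59:59):

```python
from datetime import datetime, timedelta

IBM_EPOCH = datetime(1900, 1, 1)   # z/Architecture TOD epoch (no leap seconds 1900-1972)
LEAP_SECONDS = 25                  # hypothetical: 26 real positive ones minus one negative

def tod_to_utc(tod_microseconds):
    """Convert a TOD reading (microseconds since the epoch, counting all leap
    seconds) to a broken-down UTC time, which cannot represent leap seconds."""
    seconds = tod_microseconds / 1_000_000
    return IBM_EPOCH + timedelta(seconds=seconds - LEAP_SECONDS)

print(tod_to_utc(3_692_217_625_000_000))   # 2017-01-01 00:00:00
```

The midnight value comes out exactly as Jc3s5h describes: 3,692,217,625 seconds minus 25 leap seconds is 42,734 whole days after the 1900 epoch, i.e. 00:00:00 Jan. 1, 2017 UTC.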

Windows 7

Can I copy my disk version of windows 7 to another drive in case my first drive fails? --31.55.64.160 (talk) 02:52, 18 November 2015 (UTC)[reply]

Sure, if you use Disk cloning software like Clonezilla. FrameDrag (talk) 12:55, 18 November 2015 (UTC)[reply]
What about using the inbuilt backup feature in W7? That says it creates an ISO image of the OS. But can you use this image to reinstall Windows onto a clean disk? 31.55.64.110 (talk) 21:00, 18 November 2015 (UTC)[reply]
It works perfectly and does exactly what you want. The problem is that you need a third hard disk (you can't save to or restore from the same disk) and it has to have enough free space. If you have that, Windows backup works great. I have done this several times. It may take a couple of days to do the save and a couple more for the restore if you are talking about a full 4TB drive, but in the case of smaller/less full drives it will be a lot faster. --Guy Macon (talk) 01:10, 19 November 2015 (UTC)[reply]
Can you explain further about having to use 3 hard disks? I can't grasp what you are saying.--178.104.65.199 (talk) 16:37, 19 November 2015 (UTC)[reply]
Let's say you have two hard disks, which I will call "OLD" and "NEW". You have Windows 7 on OLD and you want to put Windows 7 on NEW. You could use the disk cloning software that FrameDrag mentioned above, but you want to do it using windows backup. So your first step is to back up OLD. Where do you plan on putting the backup file? You can't back up OLD on OLD - Windows backup doesn't allow that. So you put it on NEW -- your only remaining choice. Then you find that you cannot restore a backup file on NEW to NEW - Windows backup doesn't allow that. So you can't put the backup file on OLD and you can't put the backup file on NEW. Where do you put it? --Guy Macon (talk) 18:05, 19 November 2015 (UTC)[reply]

How do I create a website with an unusual URL extension?

I see a lot of weird URL extensions on this website: https://iwantmyname.com/domains/new-gtld-domain-extensions I'm just not 100% sure what the process is behind these unusual URLs as opposed to just using dot com. Are they free for everyone to use, or did this company somehow gain exclusive rights to them? Can I buy a website through, say, WordPress or GoDaddy and just use a weird suffix? 2605:6000:EDC9:7B00:8C5C:47A8:2805:69A7 (talk) 03:59, 18 November 2015 (UTC)[reply]

The "URL extensions" are top-level domains. Ultimately ICANN decides what TLDs can exist. They've chosen to do this by soliciting proposals from private parties, with a $185,000 application fee, and with the submitting party getting control of the domain if it's approved. The benefit for ICANN appears to be that they make tons of money (more than 1000 applications according to this, so over $185 million in fees). The benefit for winners is that they get to collect fees from registrants; it's probably especially lucrative for broad domains like .biz or .app where many companies will feel they have to register their trademarks just to protect them. The benefit to everyone else is unclear. But yes, you can buy subdomains of the new TLDs, from the controlling entity or a reseller, for a price ranging from cheap to thousands of dollars. -- BenRG (talk) 05:51, 18 November 2015 (UTC)[reply]
To answer the "Can I buy a website through, say, WordPress or GoDaddy and just use a weird suffix?" question, if you set up your own server you can use any URL you want, but good luck convincing the rest of the internet to connect to it. WordPress and GoDaddy will only sell you domains that the rest of the internet has already agreed to connect to, like .com or .biz.
iwantmyname.com is almost certainly a scam. They are advertising domains like .army that they have no rights to and never will. Stick with a normal internet provider (I personally like pair.com). --Guy Macon (talk) 15:39, 18 November 2015 (UTC)[reply]
I agree. "We recommend pre-ording" is a bit of a giveaway.--Shantavira|feed me 08:36, 19 November 2015 (UTC)[reply]
I can't comment on whether the site is a scam, but "We recommend pre-ording", if we ignore the typo, is something many legitimate domain name sellers are pushing; as BenRG has said, that's partly because many are pushing people to get their name before someone else does. It may be a bit dodgy, but it isn't a clear indication the site is a scam.

I also find Guy Macon's comment very confusing. .army (which shouldn't be confused with .mil) appears to be a relatively open TLD [9] (that link itself may be of interest) so it's likely many resellers are able to provide .army domain names. GoDaddy does provide .army domain names [10], as they do for 1207 other TLDs [11] (well I think some of these don't have registration yet). WordPress since they aren't really that involved in the domain process are far more limited [12], that includes missing out on most country code TLDs even those with relatively open policies.

Whether or not people will think it's a good idea, the myriad of gTLDs which now exist are what the rest of the world has agreed to connect to as part of the ICANN process. Here are 4 .army domain names I found from a quick internet (Google in this case) search that most people reading this thread should be able to connect to: http://www.fail.army/ , https://thejack.army/ , http://www.forexpeace.army/ , http://davids.army/ . Here's a .navy http://www.volleyball.navy , here's an .airforce http://chiptune.airforce/ , here's a .mba http://www.faisal.mba/ , and here's a really weird choice for a .cricket http://womensiceskates.cricket/ (maybe it's spam or something). Some of those are redirects but you can easily see my link is to a .whatever domain (incidentally, it sounds like .whatever was a gTLD proposal, but I didn't find precisely what happened to it). Or feel free to type it out, or simply search for your own favourite example (.example is however one TLD which should never exist as per our article).

Nil Einne (talk) 18:01, 19 November 2015 (UTC)[reply]

I was clearly mistaken about .army. It is supposed to be limited to defense contractors but http://davids.army/ is not a defense contractor. Thanks for the correction. --Guy Macon (talk) 18:14, 19 November 2015 (UTC)[reply]
Not sure why you think it's supposed to be limited to defence contractors. Nowhere in Demand Media's application do I see anything about limiting it to defence contractors, and I'm not sure how well such a restriction translates anyway; does it mean the same thing in Iceland as it does in Russia as it does in China as it does in the US? There was concern over the possibility of confusion with official websites from the US, Australian and Indian governments, but Demand appears to have allayed that concern sufficiently for ICANN with their anti-abuse policies. Defence contractors and people associated with the army may have been the target market, but that's a distinct thing. My impression is most of the new gTLDs intentionally had no specific restrictions like that. Our List of Internet top-level domains#ICANN-era generic top-level domains seems to support that view although most of it is uncited.

BTW, I wanted to add to the above but got an edit conflict: WordPress will likely work with these domains if you register them somewhere else, albeit with possible teething or other such problems [13]. Here are 2 WordPress sites under .army domains: https://robots.army/ & http://seo.army/partner-view/wordpress/

I did however notice in the .army agreement [14] it does say

While ICANN has encouraged and will continue to encourage universal acceptance of all top-level domain strings across the Internet, certain top-level domain strings may encounter difficulty in acceptance by ISPs and webhosters and/or validation by web applications. Registry Operator shall be responsible for ensuring to its satisfaction the technical feasibility of the TLD string prior to entering into this Agreement.

But I'm not aware that any ISPs have intentionally blocked any of the new gTLDs except for ones like .porn which may primarily provide content illegal in local jurisdictions. Any problems with more innocuous domains like .army are probably simply legacy issues that no one bothered to fix (similar to the WordPress issues that may have cropped up). I don't see any mention of even India blocking it despite their strong concern when it was proposed. And 2 random name servers that I think are Indian seem to look up robots.army fine.

Nil Einne (talk) 19:07, 19 November 2015 (UTC)[reply]

C \ Assembler oriented questions

Hello, I'm not a C\Assembler programmer, and I ask the question for general knowledge only. Thanks in advance:

1. A programmer writes an operating system in C. Would the programmer have any reason to write some of the OS kernel (C) code in one of the assembler languages too? I just wonder if such a combination is common practice (as with the combination of Node.js and PHP) or if it is even practical...

2. Does the Shell (CLI\GUI) sit right above the OS kernel?

Ben-Yeudith (talk) 16:39, 18 November 2015 (UTC)[reply]

Yes, typically some very low-level and performance-critical parts of the OS are written in assembler. Most C compilers support inline assembler techniques to make it less painful. As for the CLI/GUI sitting right above the OS kernel: Typically there are at least various abstraction layers (represented as libraries). But at least under UNIXy systems, the shell is a normal OS process that is directly managed by the kernel. Of course, you need some kind of terminal to interact with a shell if it's used as a UI (as opposed to a script interpreter). --Stephan Schulz (talk) 17:19, 18 November 2015 (UTC)[reply]
1. - There's usually a few places where it's still necessary to write some assembly, including:
  • at the very start of system execution, when the system is mostly uninitialised and even RAM may be unavailable
  • at the entry and exit points of interrupt service procedures and hardware signal handlers, where the normal prerequisites for the C execution environment may not be available
  • in kernel code, to allow use of specific architectural features (chiefly instructions) which do not map well to a general programming language. E.g. you may find the synchronisation code in the kernel uses the architecture's test-and-set instruction (or one of the other types of atomic instructions listed in that article's see-also section) as a building block for higher level synchronisation operations like semaphores.
  • in general (which can mean drivers or application code) you might find a modest amount of assembly code to support hardware-accelerated features like SIMD instructions or cryptographic operations.
OS kernels are almost always implemented (now) in C or C++ with a smattering of assembly as I've described above. In the past kernels have been written in languages like Pascal or Forth. There have been some research projects which aim to write all (or almost all) of the kernel in a more dynamic language like Java (JNode) and C# (Microsoft's Midori), although the time-and-memory sensitive parts usually need language extensions and require the programmer to use only a limited subset of the language in those places. As languages like Java and C# can compile down to pretty decent assembly, this isn't a totally bonkers proposition. Using a truly dynamic language like Python or Javascript in such settings is much less practical, as in these cases much less can be known for sure about the types (and thus memory layouts) of objects right up until execution time. I daresay that someone could choose again to write in a restricted subset of these languages, eschewing the very flexibility and dynamism that make them effective, and try to write an OS in e.g. asm.js. But I think that if anything has a chance to displace C (and C++, to the extent that it is used, again as a subset) in kernel development, it's more likely to be a language like Go or Rust. -- Finlay McWalterTalk 17:37, 18 November 2015 (UTC)[reply]
2. The shell is just a program. -- Finlay McWalterTalk 17:37, 18 November 2015 (UTC)[reply]
Another way to look at this problem: you have to use machine-code any time your compiler does a poor job abstracting the machine's implementation-details. In principle, your compiler could special-case a construct written in any higher-level language: in fact, this is the case whenever a compiler "built-in" feature is used, such as the vector C extensions built into gcc, or the clang atomic built-ins. These compilers allow you to write operating system primitives, like efficient locking, in pure C without resorting to machine language. You don't need to know the op-code for atomic-increment or vector-multiply on Intel or ARM or AVR... you can just call a special C function that's recognized by the compiler. However, your "pure C" will only compile if your compiler supports these intrinsic features for some specific machine-architecture - so it's somewhat circuitous to call this "platform-portable". Essentially, you're still writing code with some degree of platform-specificity, but using the syntax of a higher-level language.
Nimur (talk) 17:55, 18 November 2015 (UTC)[reply]

If you want to see a minimal working configuration of C and assembler, look at the "Bare Bones" tutorial of the OS Dev Wiki: http://wiki.osdev.org/Bare_Bones OldTimeNESter (talk) 19:49, 18 November 2015 (UTC)[reply]

external hard drive - sleep or not

I got a new external hard drive two weeks ago and it defaults to going to sleep after 30 minutes of inactivity (which I can change). It takes a long time for it to wake up. I know it saves electricity for it to sleep, but does it make it last longer? (I've had way too many external drives fail.) Bubba73 You talkin' to me? 21:56, 18 November 2015 (UTC)[reply]

A lot depends on the design of the hard drive. Both keeping it running and going to sleep cause wear, but different kinds of wear. What I recommend is setting the duration so that in general it goes to sleep when you go to sleep or when you spend all day at work, but stays spinning otherwise. IMO that's the best compromise for maximum life, and it is also the least annoying for the user. --Guy Macon (talk) 01:17, 19 November 2015 (UTC)[reply]
The sleep time can be 10, 15, 30, 45, or 90 minutes, or turned off. The default is 30 minutes (of inactivity, I assume). I generally use it a few times per day. Bubba73 You talkin' to me? 02:23, 19 November 2015 (UTC)[reply]
Well, that limits you. I would just turn off the sleep because computer pauses are so annoying. Experts and users are divided on this issue:
--Guy Macon (talk) 06:51, 19 November 2015 (UTC)[reply]
thanks, I'm now having it not go to sleep. It is too annoying to have to wait for it to spin up when needed. Also, sometimes programs trying to read a file from it think that something is wrong with the file. Anyhow, it is a 5TB that I got for about $120 after a discount, so if it dies in 4 years instead of 5, it is no big deal, since it is used for backup. Bubba73 You talkin' to me? 00:11, 20 November 2015 (UTC)[reply]
Resolved
Yeah, what little data we have on hard disk drive failure shows us that SMART values like run time, temperature, and spinup-count are poorly correlated with future failures - and so, as Guy Macon's sources note, there isn't strong evidence guiding you what you should do. The evidence does suggest that failures follow a bathtub curve, which means the drive is likely to either fail early (which means it was going to, regardless of what you did) or last a long time (and in the 4 or 5 year timeframe, as you say, the drive will be effectively worthless either way). -- Finlay McWalterTalk 00:37, 20 November 2015 (UTC)[reply]

November 19

Disguised characters?

This morning I came across some disruptive editing involving a user copying and pasting content to "new" titles of existing articles that appear to the naked eye as having the same title. I am assuming that (some) of the characters are different on an encoded level. What are the differences? I'd appreciate some tips on how I could examine them to see the differences for myself in the future. Below are the blue links to our actual articles, and the red links to the identical-seeming page titles that were created:

Thanks--Fuhghettaboutit (talk) 01:33, 19 November 2015 (UTC)[reply]

cf Homoglyph -- Finlay McWalterTalk 01:39, 19 November 2015 (UTC)[reply]
As to comparing the two - ideally there'd be some web service, but I'm not aware of one. I wrote this little Python3 script for this job:
#!/usr/bin/python3
# -*- coding: utf-8 -*-

# paste the two strings you want to show into these two variables:
s="The Game (rapper)"
t="Тhе Gаmе (rарреr)"

import unicodedata

for c,d in zip(s,t):
    print ("{:s} {:s} U+{:04x} U+{:04x} {:s} {:30s} {:30s}".format(c,
                                                                   d,
                                                                   ord(c),
                                                                   ord(d),
                                                                   "!" if c!=d  else " ",
                                                                   unicodedata.name(c),
                                                                   unicodedata.name(d),
                                                               ))
which for the first two titles you mentioned will show:
T Т U+0054 U+0422 ! LATIN CAPITAL LETTER T         CYRILLIC CAPITAL LETTER TE    
h h U+0068 U+0068   LATIN SMALL LETTER H           LATIN SMALL LETTER H          
e е U+0065 U+0435 ! LATIN SMALL LETTER E           CYRILLIC SMALL LETTER IE      
    U+0020 U+0020   SPACE                          SPACE                         
G G U+0047 U+0047   LATIN CAPITAL LETTER G         LATIN CAPITAL LETTER G        
a а U+0061 U+0430 ! LATIN SMALL LETTER A           CYRILLIC SMALL LETTER A       
m m U+006d U+006d   LATIN SMALL LETTER M           LATIN SMALL LETTER M          
e е U+0065 U+0435 ! LATIN SMALL LETTER E           CYRILLIC SMALL LETTER IE      
    U+0020 U+0020   SPACE                          SPACE                         
( ( U+0028 U+0028   LEFT PARENTHESIS               LEFT PARENTHESIS              
r r U+0072 U+0072   LATIN SMALL LETTER R           LATIN SMALL LETTER R          
a а U+0061 U+0430 ! LATIN SMALL LETTER A           CYRILLIC SMALL LETTER A       
p р U+0070 U+0440 ! LATIN SMALL LETTER P           CYRILLIC SMALL LETTER ER      
p р U+0070 U+0440 ! LATIN SMALL LETTER P           CYRILLIC SMALL LETTER ER      
e е U+0065 U+0435 ! LATIN SMALL LETTER E           CYRILLIC SMALL LETTER IE      
r r U+0072 U+0072   LATIN SMALL LETTER R           LATIN SMALL LETTER R          
) ) U+0029 U+0029   RIGHT PARENTHESIS              RIGHT PARENTHESIS             
So that shows the first letter of the two strings is different (with a !) and you see that while the "good" one is a normal ASCII/Latin 'T', the other is a Cyrillic letter that, in many fonts, resembles a T but isn't one. -- Finlay McWalterTalk 02:04, 19 November 2015 (UTC)[reply]
Thanks Finlay. I'm surprised there's no common and easy resource to examine these – which leaves people like me, who are quite unable to say "I just whipped up a python script", in the dark!--Fuhghettaboutit (talk) 03:52, 19 November 2015 (UTC)[reply]
This Cisco technical document, Homoglyph Advanced Phishing Attacks, notes that these kinds of characters are used as part of a very modern and increasingly-common security attack: the IDN homograph attack. The Cisco page also has links to some additional resources, including scripts and detailed explanations. Nimur (talk) 06:38, 19 November 2015 (UTC)[reply]
If you copy the URLs that link to the false titles and paste them in a text editor, you'll find that Тhе Gаmе (rарреr) is actually %D0%A2h%D0%B5_G%D0%B0m%D0%B5_(r%D0%B0%D1%80%D1%80%D0%B5r) and List оf dесеаsеd hiр hор аrtists is List_%D0%BEf_d%D0%B5%D1%81%D0%B5%D0%B0s%D0%B5d_hi%D1%80_h%D0%BE%D1%80_%D0%B0rtists in percent encoding. IE and Edge will display the URLs as such. --Paul_012 (talk) 15:57, 19 November 2015 (UTC)[reply]
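Those percent-encoded runs are just the UTF-8 bytes of the Cyrillic look-alikes. A small sketch in Python, building the spoofed title from explicit \u escapes so the homoglyphs are visible in the source:

```python
from urllib.parse import quote, unquote

# "The Game (rapper)" with T, e, a, p swapped for Cyrillic look-alikes
fake = "\u0422h\u0435 G\u0430m\u0435 (r\u0430\u0440\u0440\u0435r)"

# safe="()_" keeps parentheses and underscores literal, as in the wiki URL
encoded = quote(fake.replace(" ", "_"), safe="()_")
print(encoded)   # %D0%A2h%D0%B5_G%D0%B0m%D0%B5_(r%D0%B0%D1%80%D1%80%D0%B5r)
print(unquote(encoded) == fake.replace(" ", "_"))   # True
```

Each Cyrillic letter becomes a two-byte %Dx%xx pair while the genuine Latin letters pass through unchanged, which is exactly the mixed pattern visible in the pasted URLs above.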

Why don't FOLLOW sets contain epsilon?

Could anyone give an intuitive explanation of why FOLLOW sets don't contain epsilon? It might be better if you can also provide an example. JUSTIN JOHNS (talk) 09:10, 19 November 2015 (UTC)[reply]

I haven't studied this stuff in a long time, but it's not clear to me what ε in a FOLLOW set would mean. A nonterminal X can either expand to 0 terminals or to 1 or more terminals; in the former case, ε is in FIRST(X), and in the latter case, the leftmost terminal is in FIRST(X). The substring generated by a nonterminal X can either be at the end of the string or not; in the former case $ is in FOLLOW(X), and in the latter case the leftmost following terminal is in FOLLOW(X). Maybe the role you're imagining for ε in FOLLOW sets is actually filled by $. -- BenRG (talk) 14:24, 19 November 2015 (UTC)[reply]
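To make that concrete, here is a sketch of the textbook FIRST/FOLLOW fixed-point computation for a toy grammar (S → A B, A → 'a' | ε, B → 'b'). ε appears in FIRST(A) because A can derive the empty string, but the FOLLOW sets only ever collect terminals and the end marker $:

```python
EPS = "ε"
grammar = {                 # toy grammar: S -> A B ; A -> 'a' | ε ; B -> 'b'
    "S": [["A", "B"]],
    "A": [["a"], [EPS]],
    "B": [["b"]],
}
nonterminals = set(grammar)

def first_of(sym, first):
    """FIRST of one symbol: itself if terminal, its FIRST set otherwise."""
    return first[sym] if sym in nonterminals else {sym}

# FIRST sets by fixed-point iteration; ε is added only when a production can vanish
first = {n: set() for n in nonterminals}
changed = True
while changed:
    changed = False
    for lhs, prods in grammar.items():
        for prod in prods:
            nullable = True
            for sym in prod:
                f = first_of(sym, first)
                if not (f - {EPS}) <= first[lhs]:
                    first[lhs] |= f - {EPS}
                    changed = True
                if EPS not in f:
                    nullable = False
                    break
            if nullable and EPS not in first[lhs]:
                first[lhs].add(EPS)
                changed = True

# FOLLOW sets: seeded with the end marker $; ε is never added
follow = {n: set() for n in nonterminals}
follow["S"].add("$")
changed = True
while changed:
    changed = False
    for lhs, prods in grammar.items():
        for prod in prods:
            for i, sym in enumerate(prod):
                if sym not in nonterminals:
                    continue
                trailer, rest_nullable = set(), True
                for nxt in prod[i + 1:]:       # FIRST of what follows sym, minus ε
                    f = first_of(nxt, first)
                    trailer |= f - {EPS}
                    if EPS not in f:
                        rest_nullable = False
                        break
                new = trailer | (follow[lhs] if rest_nullable else set())
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

print(first)    # FIRST(A) = {'a', 'ε'}: A can derive the empty string
print(follow)   # FOLLOW(A) = {'b'}; FOLLOW(S) = FOLLOW(B) = {'$'}; never ε
```

When the rest of a production is nullable, FOLLOW of the left-hand side flows in instead of ε, which is why $ fills the role one might imagine for ε.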
This was pretty incomprehensible to someone not familiar with the terminology. I'm hazarding a guess that the relevant article is LL parser. --NorwegianBlue talk 13:27, 20 November 2015 (UTC)[reply]

Can a string be accepted during parsing if the input is exhausted but productions are yet to derive?

I would like to know whether a string can be accepted if the input gets exhausted but there are productions still to derive. Also, I would like to know whether such an issue can occur in parsing. JUSTIN JOHNS (talk) 10:12, 19 November 2015 (UTC)[reply]

A string is accepted if and only if it can be generated by the grammar. If you're talking about a situation where the string is "abc", and the parser's state looks something like "a b c . X Y", then the string will be accepted iff both X and Y can generate the empty string. -- BenRG (talk) 14:26, 19 November 2015 (UTC)[reply]
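BenRG's condition can be sketched in a few lines of Python, using a hypothetical grammar fragment where X → ε and Y → 'y' | ε (the naive recursion would loop forever on a left-recursive nullable grammar, so treat this as illustration only):

```python
EPS = "ε"
# Hypothetical grammar fragment: X -> ε ; Y -> 'y' | ε
grammar = {"X": [[EPS]], "Y": [["y"], [EPS]]}

def nullable(sym):
    """True if sym can derive the empty string (naive; assumes no left recursion)."""
    if sym == EPS:
        return True
    if sym not in grammar:        # terminal symbols can't vanish
        return False
    return any(all(nullable(s) for s in prod) for prod in grammar[sym])

# Parser state "a b c . X Y": input exhausted, X and Y still to derive
remaining = ["X", "Y"]
print(all(nullable(s) for s in remaining))   # True, so the string can be accepted
```

If either remaining nonterminal could only generate at least one terminal, the check would fail and the parser would have to reject the exhausted input.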

Swelling of old lithium-ion batteries

I've always regarded it to be common knowledge that swelling of old lithium-ion batteries is a normal effect resulting from degradation as they age. This effect has been consistently exhibited in all the old batteries I've discarded over the past ten years, and using the "spin test" to determine that a battery has passed its prime also appears to be common knowledge among acquaintances. However, a Google search today only gives results that say swollen batteries are a dangerous situation resulting either from misuse or malfunction, and that they should be discarded immediately, with no mention of it being part of the normal ageing process. What gives? --Paul_012 (talk) 16:11, 19 November 2015 (UTC)[reply]

Spin test? [15] 220 of Borg 17:48, 19 November 2015 (UTC)[reply]
"Malfunction" may just be another word for normal "degradation as they age". This also applies to people. :-) StuRat (talk) 18:44, 19 November 2015 (UTC)[reply]

Spin test: a battery that is a cylinder...

  -
 | |
  -

...on its side will not spin well, but a battery that is slightly barrel shaped...

  -
 ( )
  -

...will spin very well.

What the spin test does is to identify batteries that are only slightly swollen -- not enough to see with your eyes.

A tiny bit of swelling is not exactly "normal" but it also isn't all that uncommon and it isn't worth worrying about.

What a tiny bit of swelling usually means is that your system either has a tiny bit of overcharging or a tiny bit of overheating.

References:

--Guy Macon (talk) 19:31, 19 November 2015 (UTC)[reply]

Lithium-ion battery convenience link.
That link I added above seemed to apply more to the little flat batteries in mobile phones, but I can see how it would still work. There are of course cases where the battery is grossly swollen, and can explode, or catch fire. Apparently on some laptops battery swelling causes the trackpad button/s to misbehave. [16]
Paul 012, it may be helpful if you can give the exact Google search you did. 220 of Borg 00:00, 20 November 2015 (UTC)[reply]
Google for lithium batteries lose lithium aging. Research is ongoing into how lithium batteries "lose" lithium as they age. --Hans Haase (有问题吗) 11:29, 21 November 2015 (UTC)

November 20

Networking related

Are there any classful public IPv4 addresses in use anymore? Which servers in which countries are using them? Are US gov agencies using these addresses? Cannot find an answer by web search. If a private network uses IPv6, will there be any added advantage? 103.18.168.149 (talk) 07:47, 20 November 2015 (UTC)[reply]

IP address classes haven't existed since 1993. As for the IPv6 connection, it's best to just read the article and look at the differences with IPv4. One certain advantage is that the network will be futureproof. --71.119.131.184 (talk) 08:31, 20 November 2015 (UTC)[reply]

But networking texts like Forouzan and Stallings describe classful IPv4 without mentioning the 1993 transition. Will someone please give pointers that describe how the transition was achieved in different countries? Please elaborate or give a source. Do they mention classful addressing to explain subnetting? 103.18.170.20 (talk) 15:15, 20 November 2015 (UTC)[reply]

IP address classes have to do with assigning IP blocks. They are not assigned by class anymore. Class is gone and has been for over a decade. Now, many people incorrectly use the term "class" to refer to the size of an IP block. It does create confusion for those who actually know what an IP class is. 209.149.115.177 (talk) 16:06, 20 November 2015 (UTC)[reply]
You refer to the texts by author name only, not title or edition, so it is difficult to tell if you are working from outdated information (which I suspect is the case). Try reading this Wikipedia article for more info - Classful network. Owlster59 (talk) 16:18, 20 November 2015 (UTC)[reply]

Thanks for your guidance.103.18.169.5 (talk) 16:42, 20 November 2015 (UTC)[reply]

November 21

reading / writing a file with javascript

Hi, I'm trying to write a simple game in html/css/javascript and want to be able to read / write the score, etc to/from a file, is this at all possible? I know that visualbasic scripting has Scripting.FileSystemObject. Any help/comment would be appreciated.

(ps. I have no option to google or download stuff from other sites.) — Preceding unsigned comment added by Bejacobs (talkcontribs) 11:01, 21 November 2015 (UTC)[reply]

Javascript in a web browser doesn't have untrammelled access to the user's file system, as this would be a massive security problem. The new File API allows users to grant limited access to some pages, although (because it's new) support for it is patchy. If you just want persistent storage on the client (e.g. to save state, which can be recovered the next time someone plays the game) you can use the localStorage API or even just store what you need in a cookie. -- Finlay McWalterTalk 11:09, 21 November 2015 (UTC)[reply]
If, on the other hand, you were writing in Javascript but on the server rather than the client, environments like NodeJS have a filesystem API. -- Finlay McWalterTalk 11:12, 21 November 2015 (UTC)[reply]