
Wikipedia:Reference desk/Computing

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 110.22.20.252 (talk) at 03:55, 11 November 2017 (→‎using two laptops in tandem). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the computing section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


November 4

Program/protocol to resume transfer after a dropout without starting again from the beginning

Suppose I have a large file, terabytes in size, that I want to transfer from one computer to another on the other side of the world over the internet. What program or protocol can I use to avoid restarting the transfer from the beginning if I have a dropout that lasts for hours on end? 110.22.20.252 (talk) 00:12, 4 November 2017 (UTC)[reply]

BitTorrent Andy Dingley (talk) 00:35, 4 November 2017 (UTC)[reply]
TeamViewer is a remote-desktop protocol; it supports file transfers with resuming. HTTP supports resuming, but you need to install an HTTP server on the remote machine, and use wget or curl to fetch the file(s). In either case, I would suggest using 7-Zip, or similar, to compress and split the file into 100 MByte chunks. This means that if the resume fails, there is less to fetch again. LongHairedFop (talk) 12:19, 4 November 2017 (UTC)[reply]
rsync over ssh - details -- Finlay McWalter··–·Talk 16:08, 4 November 2017 (UTC)[reply]
BTW, if you are doing this between two home computers, be aware that it's the upstream speed of the remote computer that's most likely to limit the transfer. A 16 Mbit/s connection will take about 7¼ days to transfer 1 TByte of data. You might be better off splitting the file(s) and burning them onto Blu-ray discs. A Blu-ray disc can hold between 50 GB and 128 GB of data. LongHairedFop (talk) 16:38, 4 November 2017 (UTC)[reply]
TeamViewer sounds like the way to go: easy to set up and use, with file-transfer resuming over outages. 110.22.20.252 (talk) 01:38, 5 November 2017 (UTC)[reply]

November 5

Text on a computer

Can someone give me a rough idea of how computers are programmed to display text at the lowest level? —Polyknot (talk) 03:12, 5 November 2017 (UTC)[reply]

Our article on text mode has some details. WegianWarrior (talk) 10:32, 5 November 2017 (UTC)[reply]
Even simpler than that is the Hitachi HD44780 LCD controller, which is very widely used, for devices from coffee machines to Arduino and upwards. Andy Dingley (talk) 12:33, 5 November 2017 (UTC)[reply]
Video card and Graphics Device Interface might also be useful. "The lowest level" is not a very well-defined term these days, as there's a great deal of "stuff" going on between the API call that the programmer uses and the actual appearance of text (or other content) on the monitor (or other display device). Tevildo (talk) 12:37, 5 November 2017 (UTC)[reply]
  • That would depend somewhat on what the OP means by "lowest level". I remember old single-board computers and PC text mode, which is what text mode covers. This is "the lowest level" as the simplest sort of display hardware.
Looking at today's computers though (even phones) there are bitmapped graphic displays everywhere and so the question could also be read as "How does my current device show text?", which would be more about "video card" and topics like bitmap fonts vs. TrueType. A far higher level of technology, yet the "low level" within a contemporary device. Andy Dingley (talk) 12:59, 5 November 2017 (UTC)[reply]
You might like this Steam Driven Poetry Machine where he explains how it all works too, :) Dmcq (talk) 14:13, 5 November 2017 (UTC)[reply]

I mean, does someone have to design a font for each character pixel by pixel and then turn that into code? Hope that clarifies what I'm asking. —Polyknot (talk) 16:13, 5 November 2017 (UTC)[reply]

No, they're not even described by pixels nowadays, but by curves and hints such as "this line has to be the same width as that line". See [1] for a description of designing a typeface. Dmcq (talk) 16:48, 5 November 2017 (UTC)[reply]
What about on the command line, DOS, etc.? Was it done pixel by pixel? —Polyknot (talk) 18:37, 5 November 2017 (UTC)[reply]
Yes. In the old days, the characters would be stored in ROM (or generated directly by the hardware logic) as raster fonts. Text mode (already linked) is the relevant article for this application. Tevildo (talk) 19:46, 5 November 2017 (UTC)[reply]
So let's say, for argument's sake, that I was building a new OS in C. I can't just use printf; somewhere I have to define how the characters are to be drawn, or access them from the ROM as mentioned? — Preceding unsigned comment added by Polyknot (talk • contribs) 21:27, 5 November 2017 (UTC)[reply]
Yes, you need to know how to tell the video card to display text where you want it. For a VGA-compatible card in text mode, you just need to call INT 10h and the hardware does the rest. For more modern cards and for graphic modes, you'll need to write a suitable driver to control it. Tevildo (talk) 22:19, 5 November 2017 (UTC)[reply]
  • If you did this, say 30 years ago on a PC clone, you would still be able to use the INT 10H BIOS services. A section of memory would be allocated as the screen text buffer. Writing ASCII codes into the bytes of this array would display text mode characters on the screen - done for you by the video card, without needing the processor to do much more, or anything to be coded in your app or OS.
There was no need to define the character glyphs for this, as they'd already been loaded by the video board makers. With their limited (typically 8×8) dot resolution there was little scope for differences between them, so there might be the odd difference, but not much. ASCII fits into 7 bits, so there was also room for another 128 characters of a high-range character set above this, including the IBM PC's novel and distinctive box-drawing characters.
About this time, I was trying to write an analogue data capture system needing a real-time multi-channel colour display. I used 24 channels, each with three vertical bar graphs (a graph and a set of upper/lower marks). As the PCs of the day weren't quick enough to do this in a bitmapped graphic mode (which would have needed the processor to draw each new dot) I did it in text mode. Redefining 24 of these characters as bar-graph glyphs (a bar that was 1/8, 2/8, ... 7/8, 8/8 full, and two other sets for the tick marks) meant that I could now draw pixel-level graphics (at least graphics of bar graphs) very fast, at text-mode speeds, rather than the slow bitmapped-graphic speeds. Andy Dingley (talk) 00:23, 6 November 2017 (UTC)[reply]
Describing the fonts using an array of dots was the least of the problems in old computers. Just reading a keyboard was far more bother with having to cope with the order of the letters and the various modes and cope with key bounce and setting leds and multi key characters. Dmcq (talk) 13:43, 6 November 2017 (UTC)[reply]
Besides the already-mentioned text mode, it is instructive to look at the Color Graphics Adapter (CGA) and at early home computers. Machines with only 1 KB of RAM, like the ZX81, had their character set (the ZX81 character set) predefined in ROM: the ROM holds the whole firmware, and the bitmap font definition is just one part of it. The RAM could only store character numbers in the display section. Generating the TV picture meant reading each character code from the display RAM, pointing to the ROM address where that character's glyph was defined, and shifting its 8 bits out into the video signal. As only 128 characters were defined, the most significant bit simply told the hardware to invert the glyph, giving inverse-video characters. The CPC464 came with 64 KB of RAM and a character set aligned with the ASCII standard, not compatible with the ZX81's. The video section of the RAM occupied the last quarter (16 KB). Depending on the graphics mode, the hardware pointed into a colour palette of 2 to 16 displayable colours: mode 2 gave two colours (background = 0, ink = 1) at 80 characters per line; mode 1 gave 4 colours at 40 characters per line; mode 0 gave 16 colours at 20 characters per line; each mode had 25 text lines. This is similar to the PC's CGA adapter. The RAM stored exactly the bitmap displayed on the screen: printing a character meant copying its bitmap font into video memory. Like the C64, all of these home computers had only 8×8-pixel bitmap fonts. The CPC464 could not read text back from the screen, since it had been printed as bitmap graphics; recognising a character on screen is what the key labelled "COPY", next to the cursor keys, was for. The CPC464 and its successors could also customise the font. The command SYMBOL AFTER n (n valid from 0 to 255) copied the fixed ROM font into RAM; characters numbered n and above were then taken from RAM.
The command SYMBOL char_number,byte1,byte2,byte3,byte4,byte5,byte6,byte7,byte8 redefined a character's glyph, one byte per row of the 8×8 bitmap. Note that the rightmost column and the last byte provide the spacing to the neighbouring character. This is what the text mode article is talking about. --Hans Haase (有问题吗) 20:07, 6 November 2017 (UTC)[reply]
See p.181, 274, 299 --Hans Haase (有问题吗) 23:48, 6 November 2017 (UTC)[reply]

Cryptographic key vs. encryption key

Hi. I was wondering if there is a difference between a cryptographic key and an encryption key. A question, put forth in the chatroom of Stack Overflow and later, in Super User, describes this situation:

An editor working on a computing article replaces all instances of "encryption key" with "cryptographic key", with the rationale that a cryptographic key can also be used for decryption. After replacing many, he/she runs into the phrase "the first unique encryption key", whereupon he/she reverses all his/her changes of this type. Ouch!

But why?

As far as I can tell, they are synonyms, and although the metonymy must have discouraged him/her from making the change in the first place, why suddenly go through the tedious task of reversing it? Must "encryption keys" be unique and "cryptographic keys" non-unique?

5.219.19.143 (talk) 14:34, 5 November 2017 (UTC)[reply]

In symmetric-key cryptography, a key is a key. One key does all functions.
In asymmetric or public-key cryptography, the functions of encryption and decryption are separated into a "key pair" of two related keys (which are both still "cryptographic keys"). The encryption and decryption keys are now distinct: you can only use each of them for that one part of the operation, so turning a message from plaintext to ciphertext and back again to plaintext will need both. Because the encryption key cannot decrypt, it's possible to publicise it (so that anyone can send you an encrypted message, which only you can read).
In some (but not all) systems, a key pair can also be used in reverse: i.e. a decryption key for messages from A→B could be used as an encryption key for messages from B→A, and A's original encryption key would be able to decrypt them. Andy Dingley (talk) 14:47, 5 November 2017 (UTC)[reply]
I hear you loud and clear, but I don't see how it makes any difference in this case. If what you say were the deciding factor, the editor would have reversed the decision upon reaching "decryption key", not "first unique encryption key". Or at least would have tried the opposite, i.e. changing "cryptographic key" into "encryption key". 5.219.19.143 (talk) 17:04, 5 November 2017 (UTC)[reply]
It is done purely for decency's sake. It is a joke intended to say editors have no idea what the hell they are editing. FleetCommand (Speak your mind!) 13:09, 8 November 2017 (UTC)[reply]
"Cryptographic key" is less specific than "encryption key". The former may refer to a key used in authentication (e.g. a MAC key, a signing key, a signature verification key). "Encryption key" often refers to a key in a symmetric-key cipher, whether the particular operation concerned is encryption or decryption. Sometimes "cipher key" is used with the same meaning, but "encryption key" is common and well-accepted usage. "Encryption key" is also used in the context of public key cryptography to refer to the public key (in a key pair) used in encryption. --98.115.54.114 (talk) 01:58, 9 November 2017 (UTC)[reply]

Font creation (re-creating an existing font)

I'm looking for a font creation tool where I could insert letters from scans of a 70s book series to electronically re-create that font for non-commercial, electronic use at home. Basically, I wanna scan the book covers, cut out and crop to the individual letters with an image editor (Paint Shop Pro, in my case), then insert the individual scanned and cropped letters into a fontmaker where I can easily manipulate kerning between letters, and finally export a font file or format that I can use to type with both in OpenOffice and especially Corel Paint Shop Pro X2.

Eventually, the font would have to consist of three "styles" in the end: One for the "white" version of the title font (that could be the final electronic font's "regular" version), one for the "dark" version of the title font (which could be on "bold"), and a light one for the writer's name placed over the book's title (which could be on "italic").

Are there any free and easy-to-use tools you could recommend? Note: I'm not on a smartphone, I need an online or desktop tool for Windows 10. --79.242.202.112 (talk) 17:31, 5 November 2017 (UTC)[reply]

I think you can use one of these tools. Ruslik_Zero 18:50, 5 November 2017 (UTC)[reply]
Bear in mind that typefaces ('font' refers to one of the various sizes, weights etc. of a typeface) used for printing were/are designed and created by individual type designers, and may well still be the intellectual property of their designer or of an organisation that has acquired those rights, so copying and manipulating them might lead to legal problems, whether or not you intend any commercial exploitation (I Am Not A Lawyer).
Similarly, distinctive lettering designed for a particular set of book covers (and, for example, LP covers) will usually have been designed by an artist for that purpose, and that artist or publisher (or their heirs) will likely still own the rights to it. {The poster formerly known as 87.81.230.195} 90.200.138.27 (talk) 00:20, 6 November 2017 (UTC)[reply]
I'm pretty certain that the legal situation when I'm putting together a font this way to home-print it on a poster and hang it on my wall is the same as just scanning a cover in the first place. It's when I start publicly distributing a TTF or OTF font file (such as on the web) that things get iffy. --79.242.202.112 (talk) 02:02, 6 November 2017 (UTC)[reply]
Well, your certainty is not proof of what's legal, and we aren't allowed to advise here on what is. However, you may find Intellectual property protection of typefaces interesting. --69.159.60.147 (talk) 07:47, 6 November 2017 (UTC)[reply]
Thank you for that link. Via the interwiki links there, I've found that my country didn't even have a copyright law for typefaces until 1973. The 1973 law, which is the only one covering typefaces that exist only in print, states that a typeface is protected for just 10 years from its creation/first publication, and at the end of those ten years you can pay to have it protected for another 15 years at most. Since the typeface has been in use since at least 1970, it has definitely been fully legal to do this with this font since 1995, and even to publicly share the resulting electronic font online. It would be different if I were to copy an existing electronic font, as my country classifies those as "computer programs", which have far heavier copyright protection. --2003:71:4E07:BB22:C4C6:C0DB:6D12:9E8 (talk) 13:55, 7 November 2017 (UTC)[reply]
'Oh! let us never, never doubt; What nobody is sure about!' - Hilaire Belloc on the Microbe. Dmcq (talk) 11:49, 7 November 2017 (UTC)[reply]
You don't create fonts by scanning them. They aren't little photos. They are a set of vectors. You will have to scan the photo, then trace it in a vector editor, and then save those vectors as a single letter in your new font set. Further - it isn't really that easy. What you actually do is create glyphs and combine those to create letters. Then, you have to add hinting to ensure that important parts of the letters don't vanish when the font size is reduced. Yes - in the very old days, fonts were raster graphics. They aren't anymore. 209.149.113.5 (talk) 13:43, 7 November 2017 (UTC)[reply]
I know that fonts are not rasters but vectors. I've had a few looks at Glyphr Studio by now. Looks like I'll scan and crop the letters with a transparency channel in PSP, output them as PNG, use a batch converter to convert them to SVG, and then load every single SVG glyph into Glyphr Studio. --2003:71:4E07:BB22:C4C6:C0DB:6D12:9E8 (talk) 13:55, 7 November 2017 (UTC)[reply]

November 6

Setting URL for Local Server

I'm running RStudio on a local server in my home office (Linux). It's configured to listen on port 8787, so to use it, I type "server.local:8787" into my web browser. Is there a way to assign an alias URL such as "server.local/rstudio" that I can use instead of the port number? This is all on my local network, and would just be for me to use. OldTimeNESter (talk) 21:38, 6 November 2017 (UTC)[reply]

Is there a reason you aren't configuring to use the standard port 80? If it used port 80, you could just type "server.local". Otherwise, any alias URL you attempt to use such as "server.local/rstudio" is going to access port 80 and not find anything listening on that port. (Unless there is some other program listening on port 80 and that's why you're using 8787 -- if that's the case you'd need to set up the alias in whatever that other program is.) You could possibly create an alias on each client that uses the server, but that seems rather tedious and error prone. CodeTalker (talk) 23:53, 6 November 2017 (UTC)[reply]
RStudio uses port 8787 by default, and I actually have another program using port 80. I can remember accessing programs built around Apache Tomcat without typing in the port number; perhaps it only works there? OldTimeNESter (talk) 04:17, 7 November 2017 (UTC)[reply]
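A common way to get a path-based URL like server.local/rstudio is a reverse proxy: whatever listens on port 80 forwards the /rstudio path to port 8787. If the program already on port 80 is (or can sit behind) nginx, a sketch might look like this (untested; RStudio Server's own admin guide has the authoritative recipe, and it needs the websocket headers passed through):

```nginx
server {
    listen 80;
    server_name server.local;

    # Forward /rstudio/ to the RStudio Server listening on 8787
    location /rstudio/ {
        proxy_pass http://localhost:8787/;
        # RStudio Server uses websockets; these let upgrades through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect http://localhost:8787/ $scheme://$host/rstudio/;
    }
}
```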

Upgrading firefox on Slackware xfce

Hello,
I installed Slackware 14.2, 64-bit, on my laptop a few minutes ago. I got everything working, but the installation came with an older version of Firefox. I have used Linux Mint before this, but I am fairly new to Linux. In the About section, Firefox is displayed as "firefox ESR 45.2.0. Firefox is up to date." How do I upgrade it?

Also, how do I install software from a .tar.bz2 file?
The official Mozilla site says there is a 56.0.2.tar.bz2 version of Firefox for my Linux. How do I install it? Thanks a lot in advance. 59.94.7.66 (talk) 22:12, 6 November 2017 (UTC)[reply]

The current version of tar (computing) on my Linux machine will read a .tar.bz2 file directly, but you can always use the option -j to make sure it understands what to expect. Or is it that you're not familiar with using tar at all? --69.159.60.147 (talk) 03:56, 7 November 2017 (UTC)[reply]
Thanks for the reply. Yes, I have zero experience with actual Linux, as Linux Mint was totally automated/Windows-like. So, what should I do now? (I am the OP; I have dynamic IPs.) 117.200.201.99 (talk) 13:15, 7 November 2017 (UTC)[reply]
The bz2 extension means it is bzip2-compressed. You can use bunzip2 to decompress it: "bunzip2 file_to_unzip.bz2". In your case, you will end up with a file that has a tar extension. That is an archive of files. You unpack it using the tar command: "tar xf file_to_untar.tar". There are flags people like to use, such as v for verbose; f just means the next argument is the archive file. So, you will see most people do something like "tar xvf file_to_untar.tar". Most versions of tar also have bzip2 support built in: the j flag decompresses a bz2 file so tar can unpack it in one step (z is the corresponding flag for gzip). So, in your case, you only really need "tar jxvf your_file.tar.bz2". You will get a set of directories with all the files. But, that is the easy part. If what you unpacked is source code, you now need to compile the program. The program should come with a "readme" file somewhere in the archive. Read that to get proper instructions for installation. Most programs are written in C or C++, so they have a similar installation process: first you run "configure", then "make", then "make install". Unfortunately, it isn't that easy. What really happens is that you run configure, find out you are missing a dependency, and then spend time hunting down and installing that dependency. Then you run configure again, find out you are missing another dependency, and have to get that installed. After a dozen or so rounds of this, you might be lucky enough to get it to configure. Then you run make, to find out that the version of one of your programs is either too old or too new. You have to remove it, break other stuff already installed, and install the proper version. Then run make again and go through the same routine. If you ever get it to make, you can run make install and find out that it attempts to install to a directory that you don't have permission to use. So, you have to start all over as root and risk dorking up your entire operating system.
Then, you realize that this is specifically why people stick with Linux distributions that have good package managers. I use RedHat (and CentOS) because I work on professional systems. I feel most home users are still into Ubuntu (a Debian knock-off). Both RedHat and Ubuntu have very good package managers. In RedHat, if I want to install the latest version of Firefox that has been cleared by RedHat, I run "dnf install firefox" and it will take care of everything for me. Warning: Not all package managers are equal. I've worked with people who use other flavors of Linux and, as an example, one used Arch Linux. He spent days fighting with package conflicts. I haven't once had a package conflict with RedHat. But, there are many programs that I can't install because they aren't in the package manager. Arch has practically everything in their package manager. Which do you want? A package manager with everything you could think of and plenty of conflicts or one that is highly regulated? 209.149.113.5 (talk) 13:36, 7 November 2017 (UTC)[reply]
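A self-contained run-through of the unpack step, using a throwaway archive as a stand-in for the Firefox download (note that tar's j flag selects bzip2; z is for gzip). Worth knowing, too: Mozilla's Linux .tar.bz2 ships prebuilt binaries, so for Firefox itself there is no configure/make step at all; you unpack it and run the firefox binary inside.

```shell
# Build a throwaway .tar.bz2 standing in for firefox-56.0.2.tar.bz2
mkdir -p /tmp/tardemo/firefox-56.0.2
echo "sample" > /tmp/tardemo/firefox-56.0.2/README
tar cjf /tmp/tardemo/firefox.tar.bz2 -C /tmp/tardemo firefox-56.0.2
rm -rf /tmp/tardemo/firefox-56.0.2

# Unpack it: x = extract, j = bzip2 (z would be gzip), v = verbose,
# f = the next argument is the archive file
tar xjvf /tmp/tardemo/firefox.tar.bz2 -C /tmp/tardemo
cat /tmp/tardemo/firefox-56.0.2/README
```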
While I was searching for packages, package managers and Firefox, I found out that on FreeBSD it takes only "pkg install firefox" to install it, or any other software. Is there a similar way to do this on Slackware? I think FreeBSD takes care of dependencies. I do not need a lot of software; I would only require Firefox to be updated, I think. I can use the pre-installed software like wine, xine, mplayer, GIMP and others. Is there any simple way to do that? 117.200.201.99 (talk) 17:32, 7 November 2017 (UTC)[reply]
I thoroughly read https://www.slackbook.org/html/package-management.html but it was not helpful at all. 117.200.201.99 (talk) 17:34, 7 November 2017 (UTC)[reply]
I last used Slackware around 1995. At the time, the point was that the user had to compile and install everything from source. It was very anti-package manager. I don't know what the current state of the project is. 209.149.113.5 (talk) 17:44, 7 November 2017 (UTC)[reply]
It looks like Slackware officially got a package manager in version 12. It is the pkgtool system. 209.149.113.5 (talk) 18:02, 7 November 2017 (UTC)[reply]
I am still unable to install/update it. Can you provide me link to help page of that package manager or something like that? 117.215.62.146 (talk) 11:43, 10 November 2017 (UTC)[reply]

November 7

SMS email server

Is there a way to create a "virtual" email server, on an Android phone, such that

  • an email client can access the SMS messages as if they were plain text emails,
  • sending an email to the address (e.g.) +447777777@localhost would send an SMS message to the given number?

The crux of the matter is that I would like an email client that would handle SMS messages in a very similar way to how it handles email, potentially with a combined inbox. What I propose seems plausible enough, but I see no evidence of it being done.--Leon (talk) 17:44, 7 November 2017 (UTC)[reply]

Most, if not all, SMS services have an email-to-SMS gateway. For example, if you want to send a txt to a Verizon user with phone number 123-456-7890, you send an email to 1234567890@vtext.com. You could have the recipient in your address book with that email address. You see it as outgoing email. They see it as an SMS message. I don't believe there is a standard for replying to those sort of text messages. So, they may very well not come back as emails. But, half the work is done: email to text. You just need text to email. 209.149.113.5 (talk) 18:06, 7 November 2017 (UTC)[reply]
I tested it. I emailed my own Verizon number. Then, I replied to the txt from my phone and it showed up in my email as a reply. So, in that case, I can use my standard email account to send/receive text messages. 209.149.113.5 (talk) 18:09, 7 November 2017 (UTC)[reply]
Interesting...does anyone know of the gateway for Three UK? I have found old websites with a gateway, but they don't work.--Leon (talk) 17:41, 8 November 2017 (UTC)[reply]
Did you try the phone number at three.co.uk? 209.149.113.5 (talk) 18:42, 8 November 2017 (UTC)[reply]
Yes, and it went undelivered, and I received a message to that effect in my email.

Converting Wikimapia distance lines into KML files (for Template:Attached KML or so)

Is it possible to convert a WikiMapia distance measurement URL (such as http://wikimapia.org/#lang=en&lat=-15.774134&lon=-47.916291&z=17&m=b&v=2&gz=0;-479177713;-157770563;29611;0;0;1032;0;58439) into a Template:Attached KML file (like Template:Attached KML/Overseas Highway)? The syntax looks vaguely similar but I don't know if they actually mean the same thing. Jo-Jo Eumerus (talk, contributions) 17:50, 7 November 2017 (UTC)[reply]
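This is purely a guess from eyeballing the sample URL: the two large integers in the gz= parameter (-479177713 and -157770563) match the URL's lon and lat scaled by 10^7, so they look like the first vertex of the measured line; the meaning of the remaining numbers is unverified (they may be deltas for subsequent vertices). Under that assumption, a speculative converter for the first vertex might be:

```python
# Speculative: assumes the 2nd and 3rd ';'-separated values of the "gz"
# parameter are longitude and latitude scaled by 1e7. The remaining
# values in the parameter are of unknown meaning and are ignored here.
from urllib.parse import urlparse

def gz_first_vertex(url: str):
    frag = urlparse(url).fragment                  # everything after '#'
    params = dict(p.split('=', 1) for p in frag.split('&'))
    parts = params['gz'].split(';')
    lon = int(parts[1]) / 1e7
    lat = int(parts[2]) / 1e7
    return lon, lat

def to_kml_placemark(lon: float, lat: float) -> str:
    return ('<Placemark><Point><coordinates>'
            f'{lon},{lat},0'
            '</coordinates></Point></Placemark>')

url = ("http://wikimapia.org/#lang=en&lat=-15.774134&lon=-47.916291"
       "&z=17&m=b&v=2&gz=0;-479177713;-157770563;29611;0;0;1032;0;58439")
lon, lat = gz_first_vertex(url)
print(to_kml_placemark(lon, lat))
```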


November 8

Fast Combining Of Bitmaps In Memory

I am helping a friend work on a project of his and have encountered a snag, and was hoping someone might have some helpful insight. I have a pointer to a bunch of bytes describing an image (the bytes come in quads: alpha/red/green/blue); this pointer is pulled out of memory and must be the final destination for the result. What I need to do is blend a bunch of other images into the first one, so that if you were laying them over top of each other the transparencies would reinforce (if you had 3 images that were slightly transparent, blending them all with the image from memory would make that image more transparent than any of the inputs, in proportion to how transparent they were) and the colours would combine based upon their intensity. Doing this one time is quick enough, but the end goal is to have the set of blended images change, and use a fresh version of the base image, every frame of the application (roughly 40 FPS); this method of doing things is not nearly fast enough and causes everything to lurch to a halt. So, is there any way to blend the images that lets me do it quickly and, if the data structures have to change, quickly dump everything back into the original pointer format (which must remain as is)? 24.3.61.185 (talk) 00:17, 8 November 2017 (UTC)[reply]

Really, if you have a recent CPU you should be able to do millions of operations like this in 25 ms. So if it grinds to a halt, maybe you are using something that does not run on the hardware. Perhaps you will need to change languages to something that actually compiles to machine code. Modern Intel/AMD processors can use the "PADDB xmm1, xmm2/m128" instruction to add sets of bytes to go even faster. The transparency would also add. It may also be possible to get your video card to overlay transparencies. Graeme Bartlett (talk) 01:37, 8 November 2017 (UTC)[reply]
I'm using C++, it does compile to machine code, and it is on a modern CPU. The problem is probably that it is not using the GPU, since it is a bit of a hack; that is something I was hoping to get around, or to find a library for, since I'm not overly acquainted with this area. I was hoping to be pointed in the direction of what someone might try using (which would be rather helpful, but that's on me for not saying more, I suppose). At any rate, you are making some assumptions about the number of operations; it is easy to get into millions of operations when dealing with images (and processing this isn't the only thing my CPU is doing per frame). In short, thank you for the suggestions, which would apply if your assumptions about my project panned out and I were doing it in a non-compiled language on a twenty-year-old computer using really small images. 24.3.61.185 (talk) 03:03, 8 November 2017 (UTC)[reply]
You may have to turn optimization on, and tell the compiler for what CPU you are compiling. You may also be able to inspect the code the compiler outputs, to see whether it is efficient. Do you know the exact formula that you want to use to combine each byte value, and how the transparency interacts? I suspect you may want to multiply (255-α)/255 times the RGB code and sum over all layers. But you could just add, and that could be quick if it's what you want to do. What should happen if the intensity saturates (i.e. ≥255)? In assembler, you can use the PADDUSB SSE instruction to add an SSE register byte by byte into memory, stopping at 255. [2] Graeme Bartlett (talk) 03:42, 8 November 2017 (UTC)[reply]
Are you doing the complete job one element at a time, or are you doing it by blending in one picture at a time? It may be worth doing a loop which does the job for a thousand elements at a time, which would use the first-level cache efficiently. Dmcq (talk) 13:33, 8 November 2017 (UTC)[reply]
I have a set of nested loops that, in order, loop through the images, then the y coordinates, then the x; what would be a better way to structure this? The innermost loop does the blend on each of the four channels for the pixel. The blending equation takes two bytes (the byte for the current value of the image in memory, and the byte for the current image being blended into it) and gives out a byte (the new value for the image in memory). Since the blend equation gets called a lot, I made a lookup table p such that, given bytes b1 and b2, p[256 * b1 + b2] holds the value of the blend equation for those bytes. Doing a few calculations, I realized that my ideal situation would require processing several billion pixels a second, which seems unlikely to happen without framerate issues; but currently I'm noticing severe frame-rate issues with only a few images being blended in. For example, 1 image blended keeps me at 40 frames, 2 images at 38, 3 at around 27, 4 at 20, and 5 at 11; this seems way too steep a fall-off (I expected a more linear drop in frames), but I don't have access to all of the source for what I am working with (which is why I am stuck doing it this way). If I rewrite the for loop to use OpenMP to parallelize it over multiple cores, I get a framerate drop-off of, roughly, the drop from blending 1 less image (which, again, isn't what I expected). From what I'm seeing with lag using a small number of images, I feel like there has to be a way to get more performance that doesn't require learning CUDA/OpenCL for a single use. (I can work at a resolution of 640x480; at 5 images that's around 70 million pixels per second, and ideally I would be happy if I could get 10-15 images blended without lag...)
Out of curiosity: since most of the images to be blended can be assumed to be radial gradients fixed at various points of the memory-sourced image, and since such images don't lose much from rescaling, do you think that if I downscaled all the images by a factor of four on each axis (so to 1/16th of the pixels), blended them, then rescaled the memory image back up to full size before sending it on its way, I would see a decent increase in performance? (I'm guessing so, but I'm also guessing that this could still hit some walls.)24.3.61.185 (talk) 18:03, 8 November 2017 (UTC)[reply]
Assuming that pixels with increasing x are contiguous, then the easiest (!) change might be: loop over y; loop over the images; loop over x; do a pixel. If you can do a number of pixels at a time in that inner loop over x, that would also be good. Another option would be: loop over y; loop over x; loop over the images; do a pixel. Dmcq (talk) 18:22, 8 November 2017 (UTC)[reply]
Thank you:-) Just for my own sake of learning, why would you do y then images rather than images then y for the loop?24.3.61.185 (talk) 18:47, 8 November 2017 (UTC)[reply]
Because doing it that way you won't have lines of your source and target being thrown out of the cache and then reloaded for the next image. If you have a few images, doing it an image at a time triples the L3 cache or DRAM accesses. Dmcq (talk) 19:04, 8 November 2017 (UTC)[reply]
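A minimal sketch of that loop order (y, then image, then x), using a 64 KiB lookup table like the one described above. The names and the row-major RGBA byte layout here are assumptions for illustration, not the actual code from the game:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Build the 256x256 lookup table for the blend (a + b) / (1 + ab),
// with a and b being each byte divided by 255.0.
std::vector<uint8_t> makeBlendTable() {
    std::vector<uint8_t> p(256 * 256);
    for (int b1 = 0; b1 < 256; ++b1)
        for (int b2 = 0; b2 < 256; ++b2) {
            double a = b1 / 255.0, b = b2 / 255.0;
            p[256 * b1 + b2] =
                uint8_t(std::lround(255.0 * (a + b) / (1.0 + a * b)));
        }
    return p;
}

// Cache-friendly order: walk each row once and blend every image into it
// before moving to the next row, so the target row stays hot in cache.
void blendAll(std::vector<uint8_t>& target,
              const std::vector<std::vector<uint8_t>>& images,
              int w, int h, const std::vector<uint8_t>& p) {
    for (int y = 0; y < h; ++y)
        for (const auto& img : images)
            for (int x = 0; x < w * 4; ++x) {   // 4 channels, contiguous
                std::size_t i = std::size_t(y) * w * 4 + x;
                target[i] = p[256 * target[i] + img[i]];
            }
}
```

The inner loop touches the target row and one image row strictly sequentially, which is what keeps the cache traffic down.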
You say that this is a one-time thing and that you are "blending" images. I would write an imagemagick command that loads both images, resizes them to the same size (resize, crop, whatever) and then "blends" them. There are many options for blending. Finally, output the result. You are unlikely to write a process that is faster than what imagemagick already does. As a test, I loaded two extremely large photos and used the imagemagick utility composite: composite -blend 50 DC0142.jpg DC0143.jpg blended.jpg. I got the two photos blended together in far less than a second - not even half a second. 209.149.113.5 (talk) 18:41, 8 November 2017 (UTC)[reply]
That would be a really good suggestion, but by "one time thing" I mean that I will only need to do what I am doing this way one time - thus learning GPU parallel stuff would be a real bummer - not that I only need the images to be combined one time. For those curious: my friend coded a game years ago, but lost the source code. He used static lights that were all blended into an overlay, but he always wanted to add dynamic, moving lights to it. I found that I could get the game to execute a call from a DLL each frame and pass the overlay into it (and that I could inject the data for the dynamic lights into each scene: coordinates, what they attach to, etc.). So all that was left was to blend them in, which, sadly, caused a massive performance hit; we are stuck doing everything on the CPU without any ability to tweak the major structure of the game code, hence this hack of a solution.24.3.61.185 (talk) 18:47, 8 November 2017 (UTC)[reply]
It appears that you want to dynamically shade the display. You need to do that using vector operations, not for loops. That is why you need to do it in the video card, not the CPU. 209.149.113.5 (talk) 18:55, 8 November 2017 (UTC)[reply]
Yes, but I don't know how to take over the game's display, but I can hijack a bitmap from the game and work with that, so I'm stuck with the CPU at the moment.24.3.61.185 (talk) 19:19, 8 November 2017 (UTC)[reply]
What is the blending equation? Your array of bytes p(256 * b1 + b2) will take up 65K and so cause problems in the L1 data cache. Dmcq (talk) 18:50, 8 November 2017 (UTC)[reply]
The blending equation is (a + b) / (1 + ab), with a and b being the value of each byte divided by 255.0. I was computing it directly in the loop at first, but it ran smoother with the computation moved out into the table - it works out to 3 casts, 2 adds, a multiply, and a divide per channel, with 4 channels. *Since the overlay is just solid colour with constant alpha, a quick operation on the alpha channel of the final output puts the alpha where it needs to be, but lets me do the repeated blending without using a different equation (it ran smoother doing it that way).24.3.61.185 (talk) 19:23, 8 November 2017 (UTC)[reply]
That's interesting - that's the same as the addition formula for tanh of a sum. If one converted each value to its artanh, added, and then converted back, that would give the same result. Doing that with the actual inverse functions of course wouldn't be sensible - but a lookup table of 256 approximations to artanh could be used, and then you just add the values; some sort of scale and lookup at the end would give you the tanh of the summed values for the various images. Dmcq (talk) 23:46, 8 November 2017 (UTC)[reply]
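That table trick might look something like the following sketch. All names here are hypothetical, and the 0.9999 clamp is my own assumption to keep atanh(1) from being infinite for byte value 255:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// (a + b) / (1 + ab) is the tanh addition formula:
//   tanh(x + y) = (tanh x + tanh y) / (1 + tanh x * tanh y)
// so blending N images per channel becomes N table lookups and adds,
// plus a single tanh at the very end, instead of N divisions.

float atanhTable[256];

void initAtanhTable() {
    for (int i = 0; i < 256; ++i) {
        // Clamp just below 1.0: atanh(1) is infinite (byte value 255).
        float v = std::min(i / 255.0f, 0.9999f);
        atanhTable[i] = std::atanh(v);
    }
}

// Convert the accumulated sum back into a byte.
uint8_t fromSum(float s) {
    return uint8_t(std::lround(255.0f * std::tanh(s)));
}
```

For two bytes a and b this would be fromSum(atanhTable[a] + atanhTable[b]); it generalizes to any number of images by extending the sum before the single final conversion.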
What I said there would, for a single use, involve three memory accesses, but wouldn't hurt the L1 cache much. It might be better to simply do the arithmetic and be done with it. If you want to avoid going to float and back again, it can be multiplied out as (0xFF*0xFF*0x100*(a+b))/((0xFF*0xFF)+a*b), followed by a shift right by 8; the compiler can figure out what those constants are. In fact it might give better rounding to have 0xfe8000 as the constant on top, to round up slightly but not go over 0xFF. (0xFF*0xFF is 0xFE01.) Dmcq (talk) 16:07, 9 November 2017 (UTC)[reply]
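A minimal integer-only sketch of that idea, using the plain 0xFE01 scaling with truncating division (the shifted rounding variant mentioned above is left out here as a simplification on my part):

```cpp
#include <cassert>
#include <cstdint>

// Integer-only blend of (a + b) / (1 + ab), where a and b are bytes
// divided by 255. Multiplying numerator and denominator out by 255*255
// gives (0xFE01 * (a8 + b8)) / (0xFE01 + a8 * b8) on the raw bytes,
// with 0xFE01 == 255 * 255, so no floating point is needed.
inline uint8_t blend(uint8_t a, uint8_t b) {
    uint32_t num = 0xFE01u * (uint32_t(a) + b);
    uint32_t den = 0xFE01u + uint32_t(a) * b;
    return uint8_t(num / den);   // truncating division
}
```

Blending with 0 comes out exact (blend(0, x) == x) and 255 stays saturated at 255, matching the float version's behaviour at the endpoints.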
I'm going to run some tests today and see what gives the best results - I have bytes coming in that I have to work with, so I'm wondering if it might not be faster to do a lookup with tangents. (I can pre-apply the atanh to all of the images that are going to be used for the dynamic lights; the static lights, though, are generated when the scene loads, not pulled from files, which is a little unfortunate.) Thank you for all of the help, I'll reply back to say how everything went:-) Thank you so very much, this has been a very helpful experience:-)24.3.61.185 (talk) 18:50, 9 November 2017 (UTC)[reply]

Having a problem implementing equation 7 in https://en.wikipedia.org/wiki/Multilateration

[Question moved to WP:RD/Math.] Tevildo (talk) 19:54, 8 November 2017 (UTC)[reply]

November 9

boot manager file

My computer displays the following message:

BOOTMGR is missing

Press Ctrl+Alt+Del to restart

What do I do?

Note: I've searched the internet but can't find a download link for the boot manager file. My OS is "Windows 7 Ultimate" 32-bit. I'm using my Android phone right now; please give me a direct download link for the auto pendrive booting option if it's available.

P. S. This is very important.

119.30.47.177 (talk) 11:02, 9 November 2017 (UTC)[reply]

  • See [3]. If you do not have the original Win7 installation files (installation/recovery DVD, thumbdrive,...), we will not give you a download link (we do not link to copyright violations).
Before attempting anything, though, you should back up your data if you have not already done so and are able to do so (meaning: you know how to do it, someone among your family/friends knows how to do it, or you are ready to pay for it). Reinstalling the OS has a good chance of wiping a good portion of your files.
Also, saying "this is very important" will not get you faster or better answers. TigraanClick here to contact me 12:15, 9 November 2017 (UTC)[reply]
I am not sure why we can not give a link ... from Microsoft itself. Though an installation key is necessary. Ruslik_Zero 20:19, 9 November 2017 (UTC)[reply]
You may have to use another computer to make a boot DVD or CD or USB stick. Don't expect that you can use an android phone to make these! Graeme Bartlett (talk) 22:26, 9 November 2017 (UTC)[reply]

Boot manager file 2

Sorry about this post; my phone is not allowing me to reply in the previous section. I possess the original CD - the problem is it's broken... I said it's important because I use the computer as a TV, so I'm stuck. My Android phone is not of good quality... Anyway, I have functional USB ports, and the phone is the only way I can download a boot manager file onto a pendrive... Please help me. 119.30.47.87 (talk) 15:03, 9 November 2017 (UTC)[reply]

It has been a few years since I fiddled with the Windows boot system, but I think there is no way to download a boot manager file from the internet. Your only option is to find out your product key, make a recovery disk, and then repair your system using that recovery disk. —usernamekiran(talk) 11:59, 10 November 2017 (UTC)[reply]

November 10

using two laptops in tandem

I have an HP laptop. The OS is Ubuntu Linux. I am doing some programming in C++. What I need is another, identical laptop to work alongside (in tandem) in such a way that the second laptop could be used by someone else who might hopefully help me to resolve a difficult problem with the code. How can it be done? Thanks, --AboutFace 22 (talk) 17:27, 10 November 2017 (UTC)[reply]

I want to give additional details as to what I need. I need both laptops to be loaded with the same software, which is of course trivial. I want the other person to open my C++ code on his laptop, which is again trivial. I want both of us to go down the code, which is very long, with me explaining the logic of it. Say this hypothetical friend finds a bug and makes a correction: I want the code on my laptop to reflect that correction, and if we do test runs I want the std::cout output to be shown on both laptops. Thanks, --AboutFace 22 (talk) 17:57, 10 November 2017 (UTC)[reply]

Etherpad is a highly customizable Open Source online editor providing collaborative editing in really real-time.

http://etherpad.org/

https://www.codingmonkeys.de/subethaedit/

https://en.wikipedia.org/wiki/Gobby

110.22.20.252 (talk) 03:52, 11 November 2017 (UTC)[reply]

Who were the Dalton gang, and what did they do?

In Reflections on Trusting Trust by Ken Thompson, he makes reference to a group known as the "Dalton gang": I would like to criticize the press in its handling of the 'hackers,' the 414 gang, the Dalton gang, etc. The acts performed by these kids are vandalism at best and probably trespass and theft at worst.

We have an article on the Dalton Gang, but that article is about Old West outlaws, not an '80s hacking group. My Google-fu is not helping me. 192.88.255.9 (talk) 17:50, 10 November 2017 (UTC)[reply]

The "Dalton Gang" was a couple of 13-year-olds (and possibly one older kid who was showing off some scripts he found) at the Dalton School in New York. They got into a computer. The point is valid: they didn't "hack"; they were what are now called "script kiddies." 209.149.113.5 (talk) 19:53, 10 November 2017 (UTC)[reply]

November 11